
Computer Algebra

MSE

Matthias Meyer

Exam summary
Switzerland
20.1.2023

Summary created by Matthias Meyer. Note there might be errors!

Contents

 1 Newton Polynomial Interpolation
  1.1 Collocation
  1.2 Aitken-Neville recursion formula
  1.3 Newton basis polynomials
  1.4 Example: Collocation polynomial
  1.5 Further information
  1.6 Problems
   1.6.1 Error Calculation example
 2 Chebyshev arguments
   2.0.1 Exercise 2 and 3 of exercise sheet 2 are important for the exam!
 3 Hermite Interpolation (Osculation)
  3.1 Example: Hermite Interpolation and Error Calculation
  3.2 Example 2
 4 Multi-variable Polynomial Interpolation
  4.1 Example: Multi-variable polynomial Interpolation
   4.1.1 Alternative Method:
  4.2 Questions
 5 Spline Interpolation
  5.1 Idea
  5.2 Cubic Spline
   5.2.1 Solve problem
    5.2.2 Natural Splines
   5.2.3 Formulas for the cubic clamped spline interpolation S
   5.2.4 Example Natural Spline
   5.2.5 Example
  5.3 Bernstein-Bézier Splines (B-B-Splines)
   5.3.1 Bernstein Polynomial
   5.3.2 Simple Bézier Curves
   5.3.3 Composite Bézier Curves
   5.3.4 Example: Composite Bézier Curves
   5.3.5 Properties
   5.3.6 Casteljau recurrence
   5.3.7 Example
 6 Linear Least-Squares approximation
  6.1 Idea
  6.2 Linear Least-Squares
   6.2.1 Thinking hint
   6.2.2 Normal equations
  6.3 Singular-value decomposition (SVD)
   6.3.1 Idea
   6.3.2 Uniform arguments and orthogonal polynomials
   6.3.3 Calculation of the first terms for orthogonal polynomials:
   6.3.4 Exercise one, least square parabola
    6.3.5 Exercise three, Savitzky-Golay filter
   6.3.6 Exercise four, orthogonal polynomials
   6.3.7 Exercise five, singular value decomposition
  6.4 Chebyshev polynomials
   6.4.1 Idea
   6.4.2 Definition
   6.4.3 Properties
   6.4.4 Usage
  6.5 Continuous Chebyshev approximation
  6.6 Continuous Least-Square Legendre approximation
   6.6.1 Legendre continuous least square parabola
   6.7 Multi-variate least-square
   6.7.1 Example one
   6.7.2 Example three
   6.7.3 Example six
   6.7.4 Example seven
 7 Differentials, Taylor formulas and Jacobian
  7.1 Differential
   7.1.1 Definition
  7.2 Taylor
   7.2.1 Example
  7.3 Jacobian matrix and determinant
   7.3.1 Estimating navigation error by inversion of Jacobian determinant
   7.3.2 Example three
   7.3.3 Example one
 8 Ordinary differential equations
  8.1 Definition
  8.2 Explicit methods
   8.2.1 Euler method
   8.2.2 Error Calculation
   8.2.3 Example
  8.3 Explicit Runge-Kutta Methods
   8.3.1 Example
  8.4 Butcher tableau
  8.5 Step-size adaption
   8.5.1 Idea
   8.5.2 Stability of explicit methods
   8.5.3 Exercise adaptive step size
   8.5.4 Exercise Stability polynomial
   8.5.5 Stiffness
   8.5.6 Exercise stiffness detection test
   8.5.7 Van der Pol second-order differential equation
 9 Formulas
  9.1 Differentiation Formulas
  9.2 Integration Formulas
  9.3 Table of Indefinite Integrals
   9.3.1 Basic Functions
    9.3.2 Products of e^x with cos x and sin x
    9.3.3 Product of Polynomial p(x) with ln x, e^x, cos x, sin x
  9.4 Taylor Polynomial/Series
   9.4.1 Important Taylor Series
  9.5 Determinant
   9.5.1 Sarrus
  9.6 Matrix
   9.6.1 Transpose
   9.6.2 Multiplication

Some parts are also available on the following webpage.

1 Newton Polynomial Interpolation

1.1 Collocation

Collocation = all measurement points are represented exactly by a function, for example a polynomial. The polynomial

y(x) = p(x) = c0 + c1·x + c2·x² + ... + cm·x^m   with   y(xk) = p(xk) = yk   (k = 0, 1, ..., n)

results in a linear equation system with n+1 equations.

1.2 Aitken-Neville recursion formula

One way to solve this is by searching for a polynomial formula as can be seen below. (It would also be possible to generate a function by connecting the different data points with straight lines. On a computer this would take a lot of computational effort, since one would need a lot of if/else statements.)

y(x) = p(x) = c0 + c1·x + c2·x² + c3·x³ + ··· + c_{m−1}·x^{m−1} + cm·x^m   (c0, c1, ... ∈ ℝ)

To get the solution of this polynomial, one has to solve the following equation system:

y0 = c0 + c1·x0 + c2·x0² + c3·x0³ + ··· + c_{m−1}·x0^{m−1} + cm·x0^m
y1 = c0 + c1·x1 + c2·x1² + c3·x1³ + ··· + c_{m−1}·x1^{m−1} + cm·x1^m
...
yn = c0 + c1·xn + c2·xn² + c3·xn³ + ··· + c_{m−1}·xn^{m−1} + cm·xn^m

As one can see, this equation system gets really large and could be difficult to solve on a microcontroller. But there exists a nice algorithm which makes solving this system easier, called the Aitken-Neville recursion, which divides the huge equation system into little parts.

p(x) = p_{0,1,2,...,n−1,n}(x) = [ (x − x0)·p_{1,2,...,n−1,n}(x) − (x − xn)·p_{0,1,2,...,n−1}(x) ] / (xn − x0)

The formula above shows how the global interpolating polynomial is combined from the partial interpolation polynomials p_{0,1,2,...,n−1}(x) and p_{1,2,...,n−1,n}(x). To these partial interpolation polynomials one can again apply the formula, until one ends up with only two data points.
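
As a small illustration (my own sketch, not from the lecture), the recursion can be evaluated numerically in Python without ever building the equation system:

    def neville(xs, ys, x):
        # Evaluate the interpolating polynomial at x via the
        # Aitken-Neville recursion; no coefficients are computed.
        p = list(ys)              # p[i] starts as the constant polynomial y_i
        n = len(xs)
        for level in range(1, n):
            for i in range(n - level):
                # p[i] becomes the partial polynomial p_{i,...,i+level}(x)
                p[i] = ((x - xs[i]) * p[i + 1]
                        - (x - xs[i + level]) * p[i]) / (xs[i + level] - xs[i])
        return p[0]

    # data from the collocation example in section 1.4
    print(neville([0, 1, 2, 4], [1, 1, 2, 5], 3.0))   # -> 3.5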

1.3 Newton basis polynomials

Since the calculation with the Aitken-Neville recursion is quite tedious, Newton came up with other basis polynomials πk(x) with k = 0, 1, ..., n (the Aitken-Neville recursion used the monomials 1, x, x², ..., x^m).

π0(x) = 1
π1(x) = (x − x0)
π2(x) = (x − x0)(x − x1)
...
πk(x) = (x − x0)(x − x1)···(x − x_{k−1})
...
πn(x) = (x − x0)(x − x1)···(x − x_{n−1})
(1)

Which results in the final polynomial which can be seen in Equation 2

p(x) = a0·π0(x) + a1·π1(x) + a2·π2(x) + ··· + am·πm(x)
(2)

When one now writes the equation system one sees that this system is much easier to solve:

y0 = a0
y1 = a0 + a1(x1 − x0)
y2 = a0 + a1(x2 − x0) + a2(x2 − x0)(x2 − x1)
...
yn = a0 + a1·π1(xn) + a2·π2(xn) + ··· + an·πn(xn)

When one also applies the Aitken-Neville recursion it gets even easier and independent of the order, since one just calculates divided differences. Below one can see the calculation of the first ak terms.

k = 0:  y(x0)
k = 1:  y(x0,x1) = (y(x1) − y(x0)) / (x1 − x0)
k = 2:  y(x0,x1,x2) = (y(x1,x2) − y(x0,x1)) / (x2 − x0)
k = 3:  y(x0,x1,x2,x3) = (y(x1,x2,x3) − y(x0,x1,x2)) / (x3 − x0)

Where y(x0,x1,...,xk) is called the divided difference.

y(x0,x1,...,xk) = ( y(x1,x2,...,xk) − y(x0,x1,...,x_{k−1}) ) / (xk − x0)   (k = 0, 1, ..., n)

When the points have the same distance to each other, the formula gets even easier:

y(x0,x1,...,xk) = Δ^k y0 / (h^k · k!)

1.4 Example: Collocation polynomial

Given is the following dataset:

{(0,1), (1,1), (2,2), (4,5)} = { (xk, yk) | k = 0, 1, ..., n = 3 }

Calculate the collocation polynomial.

One can do the calculation the following way:

x0  y0
        Δy0
x1  y1        Δ²y0
        Δy1         Δ³y0
x2  y2        Δ²y1         Δ⁴y0
        Δy2         Δ³y1
x3  y3        Δ²y2
        Δy3
x4  y4

and therefore get the following result for the given data points:

x      | y      | π1                  | π2                    | π3
x0 = 0 | 1 = a0 |
       |        | (1−1)/(1−0) = 0 = a1
x1 = 1 | 1      |                     | (1 − 0)/(2 − 0) = 1/2 = a2
       |        | (2−1)/(2−1) = 1     |                       | (1/6 − 1/2)/(4 − 0) = −1/12 = a3
x2 = 2 | 2      |                     | (3/2 − 1)/(4 − 1) = 1/6
       |        | (5−2)/(4−2) = 3/2
x3 = 4 | 5      |

According to Equation 2 one gets then the following result:

y(x) = p(x) = 1 + 0·π1(x) + (1/2)·π2(x) − (1/12)·π3(x)
            = 1 + 0·(x − 0) + (1/2)·(x − 0)(x − 1) − (1/12)·(x − 0)(x − 1)(x − 2)

1.5 Further information

The result does not depend on which data point one uses first and which one last. The resulting formula might look different (different ak coefficients, but the last one is the same), yet the resulting polynomial is exactly the same. Furthermore, the data points need not be equally spaced.

1.6 Problems

With a lot of data points the Runge phenomenon occurs (oscillations with high frequencies and amplitudes towards the boundaries of the argument range). To calculate the resulting error one can use the following formula:

y(x) − p(x) = f^{(n+1)}(ξ)/(n+1)! · (x − x0)(x − x1)···(x − x_{n−1})(x − xn) = f^{(n+1)}(ξ)/(n+1)! · π_{n+1}(x)

Where ξ is a new data point; π_{n+1} is a Newton basis polynomial with one more argument. The factor f^{(n+1)}(ξ)/(n+1)! can also be abbreviated as C = f^{(n+1)}(ξ)/(n+1)!. It contains a higher-order derivative which we do not know at the moment. The formula can also be rewritten as Equation 3:

y(x) = C·π_{n+1}(x) + p(x)   with   C = f^{(n+1)}(ξ)/(n+1)!
(3)

When generalizing it one can also write Equation 4

y(x) − p(x) = y^{(d)}(ξ)/d! · (x − x0)^{d0} (x − x1)^{d1} ··· (x − xn)^{dn}   (d = d0 + d1 + ... + dn)

x, ξ ∈ ( min xi, max xi ),  i = 0, 1, ..., n
(4)

1.6.1 Error Calculation example

The model function y(x) = sin((1/2)πx) has to be interpolated at the arguments x = 0, 1, 2 by a quadratic polynomial p(x). What is the error at the position x = 1/2?

x | y                  | π1                | π2
0 | sin((1/2)π·0) = 0 |
  |                    | (1−0)/(1−0) = 1
1 | sin((1/2)π·1) = 1 |                   | (−1 − 1)/(2 − 0) = −1
  |                    | (0−1)/(2−1) = −1
2 | sin((1/2)π·2) = 0 |

Error = y(x) − p(x) = sin((1/2)πx) − (−x² + 2x), where p(x) = 0 + 1·(x − 0) − 1·(x − 0)(x − 1) = −x² + 2x. At x = 1/2 this gives sin(π/4) − 3/4 = √2/2 − 3/4 ≈ −0.043.

2 Chebyshev arguments

To reduce the error mentioned above to a minimum one can use a Chebyshev distribution of the arguments.

xk = cos( (2k + 1)/(2(n + 1)) · π )   (k = 0, 1, ..., n)
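
A short sketch in Python (assuming NumPy) that produces these arguments:

    import numpy as np

    def chebyshev_nodes(n):
        # x_k = cos((2k+1) / (2(n+1)) * pi) for k = 0, ..., n
        k = np.arange(n + 1)
        return np.cos((2 * k + 1) / (2 * (n + 1)) * np.pi)

    print(chebyshev_nodes(4))   # 5 nodes in (-1, 1), clustered towards the ends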

2.0.1 Exercise 2 and 3 of exercise sheet 2 are important for the exam!

3 Hermite Interpolation (Osculation)

When the problem of collocation is extended by the requirement that certain given values of derivatives of order 0 up to some higher order k of the model function y must be met at some of the arguments x0, x1, ..., xn, we end up in an interpolation problem called osculation or Hermite interpolation. Furthermore, note that one must use the modified Newton polynomials as they can be seen in Equation 5.

3.1 Example: Hermite Interpolation and Error Calculation

We have a railway track with the following points given: (0,0), (2,1), (4,2), and also its derivatives. Now we have to search a polynomial p2 which goes through (2,1) and (4,2) and fulfils the derivative conditions. So we have: p2(2) = 1, p2′(2) = 1, p2″(2) = 0 and p2(4) = 2, p2′(4) = 0, p2″(4) = 0.

x | y      | y′                  | y″/2!           | y‴/3!      | y⁗/4!      | y⁽⁵⁾/5!
2 | 1 = a0 |
  |        | 1 = a1 (given)
2 | 1      |                     | 0 = a2 (given)
  |        | 1 (given)           |                 | −1/8 = a3
2 | 1      |                     | −1/4            |            | 1/16 = a4
  |        | (2−1)/(4−2) = 1/2   |                 | 0          |            | 0
4 | 2      |                     | −1/4            |            | 1/16
  |        | 0 (given)           |                 | 1/8
4 | 2      |                     | 0 (given)
  |        | 0 (given)
4 | 2      |

This leads to the following result: p2 = 1·π0 + 1·π1 + 0·π2 − (1/8)·π3 + (1/16)·π4, when using the modified Newton polynomials as they can be seen in Equation 5.

π0 = 1
π1 = (x − x0) = (x − 2)
π2 = (x − x0)(x − x0) = (x − 2)²
π3 = (x − x0)(x − x0)(x − x0) = (x − 2)³
π4 = (x − x0)(x − x0)(x − x0)(x − x1) = (x − 2)³(x − 4)
π5 = (x − x0)(x − x0)(x − x0)(x − x1)(x − x1) = (x − 2)³(x − 4)²
(5)

The problem with this method is that it is not guaranteed to find a solution when not all derivatives are given. One then has to increase the order of the polynomial.
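
Assuming SciPy is available, its BPoly.from_derivatives can rebuild this osculation polynomial from the value/derivative data and check the Newton form above; a small sketch:

    import numpy as np
    from scipy.interpolate import BPoly

    # values [y, y', y''] at x = 2 and x = 4, as in the example above
    p2 = BPoly.from_derivatives([2, 4], [[1, 1, 0], [2, 0, 0]])

    xs = np.linspace(2, 4, 5)
    newton = 1 + (xs - 2) - (xs - 2)**3 / 8 + (xs - 2)**3 * (xs - 4) / 16
    print(np.allclose(p2(xs), newton))   # -> True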

Error With Equation 4 the error is then:

y^{(6)}(ξ)/6! · (x − 2)³(x − 4)³

The maximum error therefore is: max|y^{(6)}(ξ)| · max|(x − 2)³(x − 4)³| · 1/6!

3.2 Example 2

Compute two fourth-degree (!) polynomials, p1(x) and p2(x), meeting the constraints below:

p1(0) = p1′(0) = p1″(0) = 0   and   p1″(2) = 0

and

p2(4) = 2,  p2′(4) = 0,  p2″(4) = 0   and   p2″(2) = 0

Moreover, the two polynomials should meet smoothly at the point (2,1) without a crinkle (with a common tangent line). To solve this problem we introduce a new variable called a which defines the first derivative at the point (2,1). When a is equal in both schemes, the meeting of the two polynomials is smooth. The first scheme looks like this:

x | y | y′                  | y″/2!  | y‴/3!  | y⁗/4!  | y⁽⁵⁾/5!
0 | 0 |
  |   | 0
0 | 0 |                     | 0
  |   | 0                   |        | 1/8
0 | 0 |                     | 1/4    |        | |D|
  |   | (1−0)/(2−0) = 1/2   |        | |B|    |        | |F|
2 | 1 |                     | |A|    |        | |E|
  |   | a                   |        | |C|
2 | 1 |                     | 0
  |   | a
2 | 1 |

Where

|A| = (a − 1/2)/(2 − 0) = (2a − 1)/4
|B| = (|A| − 1/4)/(2 − 0) = (2a − 2)/8 = (a − 1)/4
|C| = (0 − |A|)/(2 − 0) = (1 − 2a)/8
|D| = (|B| − 1/8)/(2 − 0) = (2a − 3)/16
|E| = (|C| − |B|)/(2 − 0) = (3 − 4a)/16
|F| = (|E| − |D|)/(2 − 0) = (3 − 3a)/16

Since p1 must be of order 4 we conclude that |F| = 0 (!) and therefore a = 1.

p1 = 0 + 0 + 0 + (1/8)x³ − (1/16)x³(x − 2) = (1/4)x³ − (1/16)x⁴

Now one can do the same thing for the next polynomial, but this time a is known. When solving it one gets the following result:

p2 = (1/16)x⁴ − (3/4)x³ + 3x² − 4x + 2

4 Multi-variable Polynomial Interpolation

The polynomial interpolation can also be used with a Multi-variate Polynomial Interpolation. Where the polynomial is represented by Equation 6.

p(x,y) = a0,0·π0(x)π0(y) + a1,0·π1(x)π0(y) + a0,1·π0(x)π1(y) + a1,1·π1(x)π1(y)
p(x,y) = a0,0·1·1 + a1,0·(x − x0)·1 + a0,1·1·(y − y0) + a1,1·(x − x0)(y − y0)
(6)

4.1 Example: Multi-variable polynomial Interpolation

Given p(0,0) = 0, p(1,0) = 1, p(0,1) = 0, p(1,1) = 0.5, find p(x,y). Now let's calculate the first x row (y = 0):

x | z      |
0 | 0 = a0 |
  |        | (1 − 0)/(1 − 0) = 1
1 | 1      |

p(x; y0 = 0) = 0 + 1·(x − 0) = x

Now let's calculate the second x row (y = 1), using p(0,1) = 0 and p(1,1) = 0.5:

x | z      |
0 | 0 = a0 |
  |        | (0.5 − 0)/(1 − 0) = 1/2
1 | 0.5    |

p(x; y1 = 1) = 0 + (1/2)·(x − 0) = x/2

And in step 3 we combine those two.

y | z      |
0 | x = a0 |
  |        | (x/2 − x)/(1 − 0) = −x/2 = a1
1 | x/2    |

p(x,y) = x·1 + (−x/2)·(y − 0) = x − (1/2)·x·y

4.1.1 Alternative Method:

p(x,y) = a0,0·π0(x)π0(y) + a1,0·π1(x)π0(y) + a0,1·π0(x)π1(y) + a1,1·π1(x)π1(y)
p(0,0) = a0,0·1 = 0  ⇒  a0,0 = 0
p(1,0) = a0,0·1 + a1,0·1 = 1  ⇒  a1,0 = 1
p(0,1) = a0,0·1 + a0,1·1 = 0  ⇒  a0,1 = 0
p(1,1) = a0,0 + a1,0·1 + a0,1·1 + a1,1·1·1 = 1 + a1,1 = 0.5  ⇒  a1,1 = −0.5

p(x,y) = 1·x − (1/2)·x·y
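
A tiny Python check (my own snippet) confirms that this polynomial meets all four collocation conditions:

    def p(x, y):
        # bilinear interpolant found above
        return x - 0.5 * x * y

    # check all four collocation conditions
    for (x, y, z) in [(0, 0, 0), (1, 0, 1), (0, 1, 0), (1, 1, 0.5)]:
        assert p(x, y) == z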

4.2 Questions

How many conditions are generally required?

5 Spline Interpolation

5.1 Idea

The idea of spline interpolation is that one does not interpolate the data with one high-degree polynomial, but with multiple polynomials of lower degree. Due to that, the Runge phenomenon, which occurs for high-degree interpolations, does not appear. When following this approach the transition from one spline to the next must be considered; normally one requires that the derivatives up to an order n are the same from one spline to the next. The drawback of this approach is that one needs a lot of storage, since one needs to store a lot of functions. The advantage is that it is easier to calculate, since the Newton interpolation has a complexity of n² whereas a cubic spline interpolation has a complexity of n.

5.2 Cubic Spline

5.2.1 Solve problem

A spline can be described with Equation 7

Si(x) = ai + bi(x − xi) + ci(x − xi)² + di(x − xi)³
Si′(x) = bi + 2ci(x − xi) + 3di(x − xi)²
Si″(x) = 2ci + 6di(x − xi)
(7)

5.2.2 Natural Splines

In natural splines the bending energy min ∫ |f″(x)|² dx is minimized, therefore y0″ = 0 = yn″.

⎡ 2(h0+h1)    h1                                            ⎤   ⎛ c1      ⎞   ⎛ 3((y2−y1)/h1 − (y1−y0)/h0)                     ⎞
⎢ h1          2(h1+h2)    h2                                ⎥   ⎜ c2      ⎟   ⎜ 3((y3−y2)/h2 − (y2−y1)/h1)                     ⎟
⎢             h2          2(h2+h3)    h3                    ⎥ · ⎜ c3      ⎟ = ⎜ 3((y4−y3)/h3 − (y3−y2)/h2)                     ⎟
⎢                  ...         ...         ...              ⎥   ⎜ ...     ⎟   ⎜ ...  3((y_{i+1}−yi)/hi − (yi−y_{i−1})/h_{i−1}) ⎟
⎢             h_{n−3}     2(h_{n−3}+h_{n−2})    h_{n−2}     ⎥   ⎜ c_{n−2} ⎟   ⎜ ...                                            ⎟
⎣                         h_{n−2}     2(h_{n−2}+h_{n−1})    ⎦   ⎝ c_{n−1} ⎠   ⎝ 3((yn−y_{n−1})/h_{n−1} − (y_{n−1}−y_{n−2})/h_{n−2}) ⎠
(12)

Error Calculation for cubic splines with C2

|y(x) − S(x)|   ≤ max|y⁽⁴⁾(x)|/4! · 5H⁴/16 = max|y⁽⁴⁾(x)| · (5/384)·H⁴
|y′(x) − S′(x)| ≤ max|y⁽⁴⁾(x)|/4! · H³     = max|y⁽⁴⁾(x)| · (1/24)·H³
|y″(x) − S″(x)| ≤ max|y⁽⁴⁾(x)| · (3/8)·H²      x ∈ [x0,xn],  H = max_{i=0,...,n−1} hi
(13)

5.2.3 Formulas for the cubic clamped spline interpolation S
⎡ 2h0    h0                                                 ⎤   ⎛ c0      ⎞   ⎛ 3((a1−a0)/h0 − y0′)                                ⎞
⎢ h0     2(h0+h1)    h1                                     ⎥   ⎜ c1      ⎟   ⎜ 3((y_{i+1}−yi)/hi − (yi−y_{i−1})/h_{i−1}),  i = 1,...,n−2 ⎟
⎢        h1          2(h1+h2)    h2                         ⎥ · ⎜ c2      ⎟ = ⎜ ...                                                 ⎟
⎢                ...        ...       ...                   ⎥   ⎜ ...     ⎟   ⎜                                                     ⎟
⎢        h_{n−3}     2(h_{n−3}+h_{n−2})    h_{n−2}          ⎥   ⎜ c_{n−2} ⎟   ⎜                                                     ⎟
⎣                    2h_{n−2}     4h_{n−2}+3h_{n−1}         ⎦   ⎝ c_{n−1} ⎠   ⎝ 9(yn−a_{n−1})/h_{n−1} − 6(a_{n−1}−a_{n−2})/h_{n−2} − 3yn′ ⎠
(14)

5.2.4 Example Natural Spline

Calculate the natural cubic spline interpolation for the sine function sin(x) in the interval [0, π] using the points {0, π/2, π}. Also calculate the maximum error.
From Equation 7 one knows that

S0(x) = a0 + b0(x − 0) + c0(x − 0)² + d0(x − 0)³
S1(x) = a1 + b1(x − π/2) + c1(x − π/2)² + d1(x − π/2)³

Furthermore c0 = 0 (Equation 8) since we use natural splines, and a0 = y0 = 0, a1 = y1 = 1 (Equation 9). From Equation 12 one can write down the following equations:

(2(h0 + h1))·c1 = 3( (y2 − y1)/h1 − (y1 − y0)/h0 )
(2(π/2 + π/2))·c1 = 3( (0 − 1)/(π/2) − (1 − 0)/(π/2) )
2π·c1 = 3·(−4/π) = −12/π  ⇒  c1 = −6/π²

From Equation 10 one knows that

b0 = (y1 − y0)/(π/2) − (2c0 + c1)/3 · (π/2) = 2/π + 1/π = 3/π

From Equation 11 one knows that

d0 = (c1 − c0)/(3·(π/2)) = (−6/π²)·(2/(3π)) = −4/π³

d1 = (c2 − c1)/(3·(π/2)) = 4/π³

And finally from Equation 10 that:

b1 = (y2 − y1)/(π/2) − (2c1 + c2)/3 · (π/2) = −2/π + 2/π = 0

The error can then be estimated with Equation 13.

|y − s| ≤ max_{[0,π]} |y⁽⁴⁾(ξ)|/4! · 5·(π/2)⁴/16 = 1 · 5π⁴/(384·16) ≈ 0.0793,   with y⁽⁴⁾(ξ) = sin(ξ)
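
Assuming SciPy is available, CubicSpline with bc_type='natural' reproduces this hand calculation; a quick sketch:

    import numpy as np
    from scipy.interpolate import CubicSpline

    x = np.array([0.0, np.pi / 2, np.pi])
    s = CubicSpline(x, np.sin(x), bc_type='natural')

    print(s(np.pi / 2, 2), -12 / np.pi**2)   # S''(pi/2) = 2*c1, both ~ -1.2159
    print(s(np.pi / 4), np.sin(np.pi / 4))   # spline value vs. exact sine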

5.2.5 Example

The data below was generated by the sine function. In this example, the natural and clamped spline as well as the max error are calculated.

xi | 0     π/3     2π/3    π
yi | 0    √3/2    √3/2     0

natural Spline

⎛ 4π/3   π/3  ⎞   ⎛ c1 ⎞   ⎛ 3((a2−a1)/h − (a1−a0)/h) ⎞   ⎛ −9√3/(2π) ⎞
⎝ π/3    4π/3 ⎠ · ⎝ c2 ⎠ = ⎝ 3((a3−a2)/h − (a2−a1)/h) ⎠ = ⎝ −9√3/(2π) ⎠
(15)

⇒  c1 = −27√3/(10π²) ;  c2 = −27√3/(10π²)
⇒  d0 = (c1 − c0)/(3·(π/3)) = −27√3/(10π³) ;  d1 = (c2 − c1)/(3h) = 0 (??)
⇒  d2 = (0 − c2)/(3h) = 27√3/(10π³) (??)
⇒  b0 = (a1 − a0)/h − (2c0 + c1)/3 · h = 9√3/(5π) (??)
⇒  b1 = (a2 − a1)/h − (2c1 + c2)/3 · h = 9√3/(10π) (??)
⇒  b2 = (a3 − a2)/h − c2·(π/3) − d2·(π/3)² = −9√3/(10π) (??)

S0(x) = 0 + (9√3/(5π))(x − 0) + 0·(x − 0)² − (27√3/(10π³))(x − 0)³   (Equation 7)
(0 ≤ x ≤ π/3)
S1(x) = √3/2 + (9√3/(10π))(x − π/3) − (27√3/(10π²))(x − π/3)² + 0·(x − π/3)³
(π/3 ≤ x ≤ 2π/3)
S2(x) = √3/2 − (9√3/(10π))(x − 2π/3) − (27√3/(10π²))(x − 2π/3)² + (27√3/(10π³))(x − 2π/3)³
(2π/3 ≤ x ≤ π)

clamped Spline

⎛ 2π/3   π/3          ⎞   ⎛ c0 ⎞   ⎛ 3((y1−y0)/h − y0′)              ⎞   ⎛ 9√3/(2π) − 3   ⎞
⎜ π/3    4π/3   π/3   ⎟ · ⎜ c1 ⎟ = ⎜ 3((y2−y1)/h − (y1−y0)/h)        ⎟ = ⎜ −9√3/(2π)      ⎟
⎝        2π/3   7π/3  ⎠   ⎝ c2 ⎠   ⎝ 9(y3−y2)/h − 6(y2−y1)/h − 3y3′  ⎠   ⎝ −27√3/(2π) + 3 ⎠
(16)

⇒  c0 = (−10π + 18√3)/(2π²) ;  c1 = (2π − 9√3)/(2π²) ;  c2 = (2π − 9√3)/(2π²)
⇒  d0 = (c1 − c0)/(3h) = (12π − 27√3)/(2π³) ;  d1 = (c2 − c1)/(3h) = 0 (??)
⇒  d2 = ((y3 − y2)/h − c2·h − b2)/h² = ... = (27√3 − 12π)/(2π³) (?)
⇒  b0 = (y1 − y0)/h − (2c0 + c1)/3 · h = ... = 1 (??)
⇒  b1 = (y2 − y1)/h − (2c1 + c2)/3 · h = −(2π − 9√3)/(6π) (??)
⇒  b2 = b1 + 2c1·h + 3d1·h² = ... = 1/3 − 3√3/(2π) (??)

For the two examples above (natural and clamped) give maximum estimations for the following error quantities: y(x) − S(x), y′(x) − S′(x) and y″(x) − S″(x). The osculation error of a cubic spline can be calculated with ?? and the osculation error of a periodic cubic spline (y0 = yn ⇒ S′(x0) = S′(xn)) with Equation 13.

|y − S|   ≤ max|y⁽⁴⁾(x)| · (5/384)·H⁴ = 1 · (5/384)·(π/3)⁴ ≈ 0.01565   (H = max hi = π/3)
|y′ − S′| ≤ max|y⁽⁴⁾(x)| · H³/24 = 1 · π³/(27·24) ≈ 0.047849
|y″ − S″| ≤ max|y⁽⁴⁾(x)| · (3/8)·H² = 1 · 3π²/(8·9) ≈ 0.411234

5.3 Bernstein-Bézier Splines (B-B-Splines)

The Bernstein-Bézier splines should give the same result as the cubic splines mentioned in the previous chapter. The difference is that one does not get a single formula in the end, but different data points. A good explanation can be found in the following video. But first of all, to understand Bézier curves/splines one must be familiar with Bernstein polynomials and therefore with the binomial coefficient (see also Equation 17).

(n choose k) = n! / (k!·(n − k)!)
(17)

[Figure 1: Pascal's triangle formula — the binomial coefficients (n choose k), rows n = 0, ..., 6]

[Figure 2: Pascal's triangle numbers]

Example: (For what can the binomial coefficients be used?) What is the fourth term of (3x − 4y)⁶ (note there is no zeroth term, therefore we subtract one in the equation below)? The result can also be read from Figure 2.

(6 choose 4−1) = 6! / ((4−1)!·(6−(4−1))!) = 20

Therefore, the fourth term is:

20·( (3x)^{6−(4−1)} · (−4y)^{4−1} ) = 20·( 27x³ · (−64)y³ ) = −34560·x³y³

This is much easier than actually expanding the polynomial. An explanation can also be found in the following video.

5.3.1 Bernstein Polynomial

The Bernstein polynomial is defined in Equation 18.

B_{i,n}(t) = (n choose i) · (1 − t)^{n−i} · t^i ,   t ∈ [0,1]   (i = 0, 1, ..., n)
(18)

In Equation 18 one had an interval [0,1]. When our data is in another range like [a,b] one has to use Equation 19

B_{i,n}(u, a, b) = B_{i,n}( (u−a)/(b−a) ) = 1/(b−a)^n · (n choose i) · (b − u)^{n−i} · (u − a)^i ,   u ∈ [a,b]   (i = 0, 1, ..., n)
(19)

Furthermore, note that these polynomials look as shown in Figure 3.

[Figure 3: Plot of the Bernstein polynomials up to degree 4, together with the sum of all functions, showing the partition-of-unity property. Note the maximum of each polynomial is always at t = i/n.]

The polynomials relate to each other, as one can see in Equation 20.

d/dt B_{i,n}(t) = n·( B_{i−1,n−1}(t) − B_{i,n−1}(t) ) = −n·ΔB_{i−1,n−1}(t)
d²/dt² B_{i,n}(t) = n(n−1)·( B_{i−2,n−2}(t) − 2B_{i−1,n−2}(t) + B_{i,n−2}(t) ) = n(n−1)·Δ²B_{i−2,n−2}(t)
d^k/dt^k B_{i,n}(t) = (−1)^k · n(n−1)...(n−k+1) · Δ^k B_{i−k,n−k}(t)
(20)
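
A direct Python translation of Equation 18 (a sketch using math.comb), which also demonstrates the partition-of-unity property from Figure 3:

    from math import comb

    def bernstein(i, n, t):
        # B_{i,n}(t) = C(n,i) * (1-t)^(n-i) * t^i on [0, 1]   (Equation 18)
        return comb(n, i) * (1 - t) ** (n - i) * t ** i

    # partition of unity: the B_{i,n} sum to 1 for every t
    print(sum(bernstein(i, 4, 0.3) for i in range(5)))   # -> 1.0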

5.3.2 Simple Bézier Curves

A simple Bézier curve is defined with Equation 21. To get the idea also have a look at Figure 4.

r⃗(t) = Σ_{i=0}^{n} P⃗i · B_{i,n}(t) ,   t ∈ [0,1]
(21)

Where the P⃗i are the control points.

5.3.3 Composite Bézier Curves

The simple Bézier curves meet at common control points, which is a continuity condition (C0); often higher (smoothness) conditions are required (Ck-smooth). Such a condition is met if and only if Equation 22 holds.

Δ^ℓ P⃗_{n−ℓ,j} / hj^ℓ = Δ^ℓ P⃗_{0,j+1} / h_{j+1}^ℓ   (j = 0, 1, ..., m−2 ;  ℓ = 0, 1, 2, ..., k)
(22)

Writing out Equation 22 for C1 smoothness results in Equation 23, whereas C2 smoothness results in Equation 24 where one has to know that also Equation 23 C1 smoothness must be met.

n·(P⃗_{n,j} − P⃗_{n−1,j}) / hj = n·(P⃗_{1,j+1} − P⃗_{0,j+1}) / h_{j+1}   (j = 0, 1, ..., m−2)
(23)

n(n−1)·(P⃗_{n,j} − 2P⃗_{n−1,j} + P⃗_{n−2,j}) / hj² = n(n−1)·(P⃗_{2,j+1} − 2P⃗_{1,j+1} + P⃗_{0,j+1}) / h_{j+1}²   (j = 0, 1, ..., m−2)
(24)

The Bézier curve is defined by the control points (P⃗0, P⃗1, ..., P⃗n) (n ≥ 2) and the Bernstein polynomials. On each spline one has n+1 control points.

r⃗j(u) = Σ_{i=0}^{n} P⃗_{i,j} · B_{i,n}(u, uj, u_{j+1}) ,   u ∈ [uj, u_{j+1}]   (j = 0, 1, ..., m−1)

[Figure 4: Cubic Bézier curve (always defined by 4 control points), degree three]

5.3.4 Example: Composite Bézier Curves

The four points A= (0,0),B= (1,0),C= (2,3) and D= (2,4) are to be interpolated (joined) by composed C1 Bernstein-Bézier splines: A and B are to be joined linearly (by a straight line), as well as C and D.
Compute the missing C1 Bernstein-Bézier spline of minimal degree between B and C.

To solve the exercise one can use Equation 23. The first spline and the last one have two control points each, since they are straight lines (n = 1). The second spline has four conditions, since two points must be met and two derivatives (it must be C1 smooth). Due to that n = 3, so the degree is also 3 (also called cubic).

1·(P⃗_{1,0} − P⃗_{0,0}) = 3·(P⃗_{1,1} − P⃗_{0,1}):   (1,0) − (0,0) = 3·(P⃗_{1,1} − (1,0))
3·(P⃗_{3,1} − P⃗_{2,1}) = 1·(P⃗_{1,2} − P⃗_{0,2}):   3·((2,3) − P⃗_{2,1}) = (2,4) − (2,3) = (0,1)

⇒ P⃗_{1,1} = (4/3, 0)
⇒ P⃗_{2,1} = (2, 8/3)

r⃗(t) = P⃗_{0,1}·B_{0,3}(t) + P⃗_{1,1}·B_{1,3}(t) + P⃗_{2,1}·B_{2,3}(t) + P⃗_{3,1}·B_{3,3}(t)   (with P⃗_{0,1} = B, P⃗_{3,1} = C)
     = (1,0)·(1−t)³ + 3·(4/3, 0)·(1−t)²t + 3·(2, 8/3)·(1−t)t² + (2,3)·t³

t ∈ [0,1]

[Figure 5: Exercise overview — points A, B, C, D with the control points P1,1 and P2,1]
5.3.5 Properties

5.3.6 Casteljau recurrence

The Casteljau recurrence is a similar idea as the Aitken-Neville recursion: with this idea a point on the Bézier curve can be calculated as a linear combination of two points on Bézier curves of a lower degree.

r⃗_{P0,P1,...,Pn}(t) = (1 − t)·r⃗_{P0,P1,...,P_{n−1}}(t) + t·r⃗_{P1,P2,...,Pn}(t) ,   t ∈ [0,1]

C0:  P⃗n = Q⃗0
C1:  r⃗P′(1) = n·(P⃗n − P⃗_{n−1}) = m·(Q⃗1 − Q⃗0) = r⃗Q′(0)
C2:  r⃗P″(1) = n(n−1)·(P⃗n − 2P⃗_{n−1} + P⃗_{n−2}) = m(m−1)·(Q⃗2 − 2Q⃗1 + Q⃗0) = r⃗Q″(0)
Ck:  r⃗P^{(k)}(1) = n(n−1)...(n−k+1)·Δ^k P⃗_{n−k} = m(m−1)...(m−k+1)·Δ^k Q⃗0 = r⃗Q^{(k)}(0)

If r⃗j(t) = Σ_{i=0}^{n} P⃗_{i,j}·B_{i,n}(t), t ∈ [0,1], is given with the junction points Q⃗j (degree n), the following formulas turn out:

C0:  P⃗_{n,j−1} = P⃗_{0,j} = Q⃗j
C1:  r⃗_{j−1}′(1) = Q⃗j − P⃗_{n−1,j−1} = P⃗_{1,j} − Q⃗j = r⃗j′(0)
C2:  r⃗_{j−1}″(1) = Q⃗j − 2P⃗_{n−1,j−1} + P⃗_{n−2,j−1} = P⃗_{2,j} − 2P⃗_{1,j} + Q⃗j = r⃗j″(0)
Ck:  r⃗_{j−1}^{(k)}(1) = Δ^k P⃗_{n−k,j−1} = Δ^k P⃗_{0,j} = r⃗j^{(k)}(0)
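
A short Python sketch of the recurrence (the function name de_casteljau is my own) evaluates a curve point by repeated linear interpolation:

    def de_casteljau(points, t):
        # repeated linear interpolation between neighbouring control points
        pts = list(points)
        while len(pts) > 1:
            pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                   for p, q in zip(pts, pts[1:])]
        return pts[0]

    # cubic Bezier segment from the example in 5.3.4: B, P11, P21, C
    print(de_casteljau([(1, 0), (4/3, 0), (2, 8/3), (2, 3)], 0.5))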

5.3.7 Example

Let’s do the same example as in subsubsection 5.2.5. Where the following four points are given:

ꎧ||||        (   ⎷--) (    ⎷ -)      ⎫||||
|         π- -3-   2-π --3       ⎬
|| (︸0︷,︷0)︸,  3 ,2   ,  3 , 2   ,(π,0)||
||⎩Q0=P0,0  ︸--︷︷--︸                ||⎭
        Q1=P0,1=P3,0

Therefore Q0 = (0,0),Q1 =(  ⎷-)
 π3,-32-,… and hj=h=π3-. One now has to met the following requirements:

Q⃗1 − P⃗_{2,0} = P⃗_{1,1} − Q⃗1
Q⃗2 − P⃗_{2,1} = P⃗_{1,2} − Q⃗2
Q⃗1 − 2P⃗_{2,0} + P⃗_{1,0} = P⃗_{2,1} − 2P⃗_{1,1} + Q⃗1
Q⃗2 − 2P⃗_{2,1} + P⃗_{1,1} = P⃗_{2,2} − 2P⃗_{1,2} + Q⃗2
P⃗_{2,0} − 2P⃗_{1,0} + Q⃗0 = 0⃗
Q⃗3 − 2P⃗_{2,2} + P⃗_{1,2} = 0⃗

The solution of the linear system above gives us the following points:

{ (π/9, √3/5), (2π/9, 2√3/5), (4π/9, 3√3/5), (5π/9, 3√3/5), (7π/9, 2√3/5), (8π/9, √3/5) }
= { P_{1,0}, P_{2,0}, P_{1,1}, P_{2,1}, P_{1,2}, P_{2,2} }

When we now calculate the first spline we get the same result as before.

r⃗1(t) = ( (π/9)·B_{1,3}(t) + (2π/9)·B_{2,3}(t) + (π/3)·B_{3,3}(t) ;
          (√3/5)·B_{1,3}(t) + (2√3/5)·B_{2,3}(t) + (√3/2)·B_{3,3}(t) )
      = ( (π/3)(1−t)²t + (2π/3)(1−t)t² + (π/3)t³ ;
          (3√3/5)(1−t)²t + (6√3/5)(1−t)t² + (√3/2)t³ )
      = ( x ; y )

⇐⇒  (π/3)t = x  and  (3√3/5)t − (√3/10)t³ = y   ⇒   t = 3x/π ,   y = (9√3/(5π))·x − (27√3/(10π³))·x³
(25)

For the second spline we get the following:

r⃗2(t) = ( (π/3)·B_{0,3}(t) + (4π/9)·B_{1,3}(t) + (5π/9)·B_{2,3}(t) + (2π/3)·B_{3,3}(t) ;
          (√3/2)·B_{0,3}(t) + (3√3/5)·B_{1,3}(t) + (3√3/5)·B_{2,3}(t) + (√3/2)·B_{3,3}(t) )
(26)

6 Linear Least-Squares approximation

6.1 Idea

Interpolation with the collocation methods often runs into oscillation problems for (rather large) sets of measurement points. Furthermore, in most cases the measurements also contain some erroneous points, which one does not want to reproduce in the graph. Due to that, an approximation (the data points are not represented exactly any more) might be the preferred way to represent the data.

6.2 Linear Least-Squares

To find the best approximation, one must define what a good and what a bad approximation is. Common choices for the error measure on the residuals ri are:

⇒  min( max_i |ri| )
⇒  min( Σ_i |ri| )
⇒  min( Σ_i ri² )

Note: Error = residual, and one minimizes a norm of the residuals. Mathematically, the minimization of the squared errors is the easiest, therefore this one is most commonly used (least-squares approximation).

[Figure 6: Data approximated by an error curve]

The approximation function can be described with a set of basis functions, which we name here g0, g1, ..., gm = {gj}_{j=0,...,m}. Note the basis functions are sometimes also called monomials.

With those variables one can create an equation in matrix notation, as can be seen in Equation 27:

⎛ g0(x0)  g1(x0)  ...  gm(x0) ⎞   ⎛ a0  ⎞   ⎛ y0  ⎞
⎜   ...     ...   ...    ...  ⎟ · ⎜ ... ⎟ = ⎜ ... ⎟   ⇔   G·a = y
⎝ g0(xN)  g1(xN)  ...  gm(xN) ⎠   ⎝ am  ⎠   ⎝ yN  ⎠
(27)

(G is called the design matrix.)

Since Equation 27 is normally overdetermined (m ≪ N), the errors/residuals can be calculated with Equation 28 and the squared sum S of residuals with Equation 29. The goal is now to minimize S from Equation 29. This can be done with Equation 30, which is not derived in this summary (for more information, search for orthogonal projection).

ri = yi − Σ_{j=0}^{m} aj·gj(xi)   (i = 0, ..., N)
(28)

S = Σ_{i=0}^{N} ( yi − Σ_{j=0}^{m} aj·gj(xi) )² = Σ_{i=0}^{N} ri²  ⇒  min!
(29)

(S is the error; the inner sum is the model.)

6.2.1 Thinking hint

Let's assume one has the following points: (1,1), (2,2), and one wants to approximate those points by a polynomial of degree zero (m = 0), g0(x) = 1. Then one can write the residual term as follows:

y − G·a = ⎛ 1 ⎞ − ⎛ 1 ⎞ · ( a )
          ⎝ 2 ⎠   ⎝ 1 ⎠

But as one can see, one cannot directly solve this overdetermined system; therefore one multiplies both terms from the left with G^T, which has no effect on the end result, since one is only interested in the minimum.

G^T·y − G^T·G·a = ( 1  1 )·⎛ 1 ⎞ − ( 1  1 )·⎛ 1 ⎞·( a )
                           ⎝ 2 ⎠            ⎝ 1 ⎠

When one squares the term above, treats G^T·y and G^T·G as fixed blocks (not separable), and then takes the derivative with respect to a, one gets the following result: 2·(a·(G^T·G) − (G^T·y))·(G^T·G). When one sets this term to zero (since one is interested in the minimum) and solves for a, one gets the result in Equation 30.

6.2.2 Normal equations
G^T G · a = G^T y   ⇒   a = (G^T G)^{−1} G^T y
(30)

where G^T G is called the normal matrix. The dimensions are:
G^T: (m+1)×(N+1),  G: (N+1)×(m+1),  a: (m+1)×1,  G^T y: (m+1)×1,  y: (N+1)×1,
so G^T G is (m+1)×(m+1).
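
A NumPy sketch of Equation 30 (using the data of exercise 6.3.4 below as an illustration):

    import numpy as np

    # window of five points used in exercise 6.3.4 below
    x = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([1.37, 1.70, 2.00, 2.26, 2.42])
    G = np.column_stack([np.ones_like(x), x, x**2])   # design matrix

    a = np.linalg.solve(G.T @ G, G.T @ y)   # normal equations G^T G a = G^T y
    print(a)   # -> [0.506, 0.4831..., -0.02714...]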

6.3 Singular-value decomposition (SVD)

A singular-value decomposition is one of the most widely used matrix operations in applied linear algebra.

6.3.1 Idea

Every matrix G with the dimensions (N+1) × (m+1) can be decomposed as the triple product U·D·V^T, where U is an orthogonal (N+1) × (N+1) matrix, D is an (N+1) × (m+1) diagonal matrix and V again is orthogonal with dimensions (m+1) × (m+1). When a matrix is orthogonal, the following applies: Q^T Q = Q Q^T = I and Q^T = Q^{−1}. Due to that nice property, Equation 27 can be solved according to Equation 32.

G = U·D·V^T = U · ⎛ d00   0   ...   0  ⎞ · V^T
                  ⎜  0   d11  ...   0  ⎟
                  ⎜ ...  ...  ...   0  ⎟
                  ⎜  0   ...   0   dmm ⎟
                  ⎜  0    0   ...   0  ⎟
                  ⎝  0    0   ...   0  ⎠
(31)

G·a = y becomes U·D·V^T·a = y, which results in Equation 32:

a = V·D^{−1}·U^T·y
(32)
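
The same problem solved via the SVD route of Equation 32; a sketch assuming NumPy, using the reduced (thin) SVD, which is equivalent for the least-squares solution:

    import numpy as np

    x = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([1.37, 1.70, 2.00, 2.26, 2.42])
    G = np.column_stack([np.ones_like(x), x, x**2])

    # thin SVD: G = U diag(d) V^T, so a = V diag(1/d) U^T y
    U, d, Vt = np.linalg.svd(G, full_matrices=False)
    a = Vt.T @ ((U.T @ y) / d)
    print(a)   # same coefficients as with the normal equations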

6.3.2 Uniform arguments and orthogonal polynomials

With uniform arguments xi − xj = (i − j)·h for all i, j, i.e. {x0, ..., xN} = {x0 + t·h}_{t=0,...,N}, and orthogonal polynomials, G^T G becomes diagonal and therefore the equation system can be solved easily. An example can be found in subsubsection 6.3.6.

p_{k,N}(t) = Σ_{i=0}^{k} (−1)^i · (k choose i)·(k+i choose i) · t^{(i)} / N^{(i)}
           = 1 + Σ_{i=1}^{k} (−1)^i · (k choose i)·(k+i choose i) · [t(t−1)(t−2)...(t−i+1)] / [N(N−1)(N−2)...(N−i+1)]   (k = 1, ..., N)
(33)

with t = (x − x0)/h and (k choose i) = k!/(i!·(k−i)!) = nCr(k, i).

The p_{k,N} can now be used as the gk in the design matrix, and the product G^T G will be an (m+1) × (m+1) diagonal matrix. Afterwards a can be calculated with the known formula G^T G·a = G^T y.

6.3.3 Calculation of the first terms for orthogonal polynomials:

From Equation 33 one knows that N^{(i)} and t^{(i)} are the falling factorials:

N^{(i)} = N·(N−1)·(N−2)···(N−i+1)
(34)

t^{(i)} = t·(t−1)·(t−2)···(t−i+1)
(35)

p_{0,4}(t) = (−1)^0 · (0 choose 0)·(0 choose 0) · t^{(0)}/4^{(0)} = 1

p_{1,4}(t) = Σ_{i=0}^{1} (−1)^i · (1 choose i)·(1+i choose i) · t^{(i)}/4^{(i)}
           = 1 − 1·2·(t/4) = 1 − t/2

p_{2,4}(t) = Σ_{i=0}^{2} (−1)^i · (2 choose i)·(2+i choose i) · t^{(i)}/4^{(i)}
           = 1 − 2·3·(t/4) + 1·6·( t(t−1) )/( 4·3 )
           = 1 − (3/2)·t + (1/2)·t² − (1/2)·t = 1 − 2t + (1/2)·t²

6.3.4 Exercise one, least square parabola

Compute a linear least-squares approximating parabola for the "second window" of five consecutive points (starting with x = 2) in the data.

{{x,y}}={{1,1.04},{2,1.37},{3,1.70},{4,2.00},{5,2.26}
        {6,2.42},{7,2.70},{8,2.78},{9,3.00},{10,3.14}}

Let's also say the following:

Note: normally m ≪ N (the degree is smaller than the number of points).

Write down the design matrix and the system of normal equations.

     | g0      g1      g2           | 1   x   x²
  x0 | g0(x0)  g1(x0)  g2(x0)     2 | 1   2    4
  x1 | g0(x1)  g1(x1)  g2(x1)     3 | 1   3    9
G: x2 |  ...     ...     ...    =  4 | 1   4   16
  x3 |                            5 | 1   5   25
  x4 |                            6 | 1   6   36

⎛ 1  2   4 ⎞              ⎛ 1.37 ⎞
⎜ 1  3   9 ⎟   ⎛ a0 ⎞     ⎜ 1.70 ⎟
⎜ 1  4  16 ⎟ · ⎜ a1 ⎟  =  ⎜ 2.00 ⎟
⎜ 1  5  25 ⎟   ⎝ a2 ⎠     ⎜ 2.26 ⎟
⎝ 1  6  36 ⎠              ⎝ 2.42 ⎠
(36)

With Equation 30 we get the system of normal equations as one can see in Equation 37:

                      ︷⌈--︸y︸--⎤︷
︷--------G︸T︸---------︷   1.37
⌈ 1  1   1   1   1  ⎤ ⎢  1.7 ⎥   ⌈  9.75  ⎤
⌊                   ⎦ ⎢⎢      ⎥⎥   ⌊       ⎦
  2  3   4   5   6   ·⎢⎢   2  ⎥⎥ =   41.66
  4  9  16  25  36    ⌊ 2.26 ⎦     196.4
                        2.42

                       -----G -----
         GTG          ︷⌈     ︸︸    ︷⎤
︷⌈---------︸︸--------︷⎤ ⎢  1  2  4  ⎥   ⌈                ⎤
  1  1   1   1   1    ⎢  1  3  9  ⎥      5   20    90
⌊ 2  3   4   5   6  ⎦ ·⎢⎢  1  4  16 ⎥⎥ = ⌊ 20   90   440  ⎦
  4  9   16  25  36   ⎢⌊  1  5  25 ⎥⎦     90  440  2274
                         1  6  36
        GTG
︷⎛--------︸︸-------⎞︷ ⎛    ⎞   ⎛       ⎞
   5   20    90       a0       9.75
⎝  20  90    440 ⎠ ·⎝ a1 ⎠ = ⎝ 41.66 ⎠

   90  440  2274      a2       196.4
(37)

Solve the linear system

a0 = 0.506;  a1 = 0.483143;  a2 = −0.0271429
y = a0·1 + a1·x + a2·x²

When one wants to increase the stability of the matrix one can make a statistical normalization.

Compute the output y and the derivative (!) of the approximation at the central coordinate (x = 4)

y(4) = 2.00429;   y′(4) = a1 + 2·a2·4 = 0.266

6.3.5 Exercise three, Savitzky-Golay filter

Apply the filter formulas developed in the exercise before for the data to compute approximately y for x= 3,…,8

xk | 1     2     3     4     5     6     7     8     9     10
yk | 1.04  1.37  1.70  2.00  2.26  2.42  2.70  2.78  3.00  3.14

k = 2 ⇒ x = 3: a0 = y2 − (3/35)·Δ⁴y0 = 1.70 − (3/35)·0.02 = 1.6983
k = 3 ⇒ x = 4: a0 = y3 − (3/35)·Δ⁴y1 = 2.00 − (3/35)·(−0.05) = 2.0043
k = 4 ⇒ x = 5: a0 = y4 − (3/35)·Δ⁴y2 = 2.26 − (3/35)·0.28 = 2.236
k = 5 ⇒ x = 6: a0 = y5 − (3/35)·Δ⁴y3 = 2.42 − (3/35)·(−0.54) = 2.4663
k = 6 ⇒ x = 7: a0 = y6 − (3/35)·Δ⁴y4 = 2.70 − (3/35)·0.66 = 2.6434
k = 7 ⇒ x = 8: a0 = y7 − (3/35)·Δ⁴y5 = 2.78 − (3/35)·(−0.56) = 2.828

6.3.6 Exercise four, orthogonal polynomials

Solve subsubsection 6.3.4 again by using the orthogonal polynomials {p_{k,N}(t)}_{k=0,...,2}.
From Equation 33 and subsubsection 6.3.3 one knows the three basis functions:

P_{0,4}(t) = 1 = g0;   P_{1,4}(t) = 1 − (1/2)t = g1(t);   P_{2,4}(t) = 1 − 2t + (1/2)t² = g2(t)

Now we first have to find a transformation. The transformation can be written in the following way:

t = (x − 2)/1 = x − 2 ∈ {0, ..., 4}

     | 1   1 − t/2   1 − 2t + t²/2
   0 | 1      1            1
   1 | 1     1/2         −1/2
G: 2 | 1      0           −1
   3 | 1    −1/2         −1/2
   4 | 1     −1            1

⎛ 1    1     1  ⎞              ⎛ 1.37 ⎞
⎜ 1   1/2  −1/2 ⎟   ⎛ a0 ⎞     ⎜ 1.70 ⎟
⎜ 1    0    −1  ⎟ · ⎜ a1 ⎟  =  ⎜ 2.00 ⎟
⎜ 1  −1/2  −1/2 ⎟   ⎝ a2 ⎠     ⎜ 2.26 ⎟
⎝ 1   −1     1  ⎠              ⎝ 2.42 ⎠
(38)

With Equation 30 we get the system of normal equations as one can see in Equation 39:

                         ︷⌈---y︸︸--︷⎤
︷----------G︸T︸----------︷    1.37
⌈ 1   1    1   1    1  ⎤ ⎢  1.7  ⎥  ⌈  9.75  ⎤
⌊     1         1      ⎦ ⎢⎢       ⎥⎥  ⌊        ⎦
  1   21   0   −21  −1    ·⎢⎢   2   ⎥⎥=    −1.33
  1  − 2  −1   −2   1    ⌊  2.26  ⎦     −0.19
                            2.42

                          ------G ------
          GTG            ︷⌈      ︸︸     ︷⎤
︷⌈----------︸︸----------︷⎤ ⎢ 1   11    11 ⎥   ⌈         ⎤
  1   1   1    1    1    ⎢ 1   2   − 2 ⎥     5  0  0
⌊ 1   12   0   −12  −1  ⎦·⎢⎢ 1   0   −1  ⎥⎥ = ⌊ 0  52  0 ⎦
  1  −1-  −1  −1-   1    ⎢⌊ 1  −1-  −1- ⎥⎦     0  0  7
       2        2          1  −12   12             2
     GTG                   y
︷⎛----︸︸----︷⎞ ⎛     ⎞  ︷⎛---︸︸---︷⎞
   5  0  0     a0        9.75
⎝  0  5  0 ⎠·⎝ a1  ⎠= ⎝ −1.33  ⎠
      2  7
   0  0  2     a2       −0.19
(39)

When we now calculate (G^T G)^{−1}·G^T y one gets a = (1.95; −0.532; −0.38/7). The result is therefore:

y(x) = a0·1 + a1·p_{1,4}(t) + a2·p_{2,4}(t)

y(x = 4) = y(t = 2) = a0 + a1·0 + a2·(−1) = 2.00429
y′(x = 4) = (1/1)·y′(t = 2) = a1·p′_{1,4}(2) + a2·p′_{2,4}(2) = a1·(−1/2) + a2·(−2 + 2) = 0.266
Which is the same as in subsubsection 6.3.4.

6.3.7 Exercise five, singular value decomposition

Examine and compute a least-squares approximative quadratic parabola for the data

x | −2  −1  0  1  2
y |  0   1  2  3  1

with respect to the basis functions {1, −x/2, x²/2 − 1} in the following sense:

Compute the design matrix G and the normal matrix. Hint: The normal matrix here is diagonal!

      | 1   −x/2   x²/2 − 1
  −2 | 1     1         1
  −1 | 1    1/2      −1/2
G: 0 | 1     0        −1
   1 | 1   −1/2      −1/2
   2 | 1    −1         1

⎛ 1    1     1  ⎞              ⎛ 0 ⎞
⎜ 1   1/2  −1/2 ⎟   ⎛ a0 ⎞     ⎜ 1 ⎟
⎜ 1    0    −1  ⎟ · ⎜ a1 ⎟  =  ⎜ 2 ⎟
⎜ 1  −1/2  −1/2 ⎟   ⎝ a2 ⎠     ⎜ 3 ⎟
⎝ 1   −1     1  ⎠              ⎝ 1 ⎠
(40)

G^T·y:
⎛ 1    1    1    1    1 ⎞   ⎛ 0 ⎞   ⎛  7 ⎞
⎜ 1   1/2   0  −1/2  −1 ⎟ · ⎜ 1 ⎟ = ⎜ −2 ⎟
⎝ 1  −1/2  −1  −1/2   1 ⎠   ⎜ 2 ⎟   ⎝ −3 ⎠
                            ⎜ 3 ⎟
                            ⎝ 1 ⎠

G^T·G = ⎛ 5   0    0  ⎞
        ⎜ 0  5/2   0  ⎟
        ⎝ 0   0   7/2 ⎠

⎛ 5   0    0  ⎞   ⎛ a0 ⎞   ⎛  7 ⎞
⎜ 0  5/2   0  ⎟ · ⎜ a1 ⎟ = ⎜ −2 ⎟
⎝ 0   0   7/2 ⎠   ⎝ a2 ⎠   ⎝ −3 ⎠
(41)

Solve the system of normal equations and write down a formula for the approximating parabola.

⎛ 5   0    0  | 1  0  0 ⎞
⎜ 0  5/2   0  | 0  1  0 ⎟
⎝ 0   0   7/2 | 0  0  1 ⎠

Divide the first row by the factor 5, and also divide the other rows by their factors:

⎛ 1  0  0 | 1/5   0    0  ⎞
⎜ 0  1  0 |  0   2/5   0  ⎟
⎝ 0  0  1 |  0    0   2/7 ⎠

(G^T G)^{−1}·G^T y:
⎛ 1/5   0    0  ⎞   ⎛  7 ⎞   ⎛ a0 ⎞
⎜  0   2/5   0  ⎟ · ⎜ −2 ⎟ = ⎜ a1 ⎟
⎝  0    0   2/7 ⎠   ⎝ −3 ⎠   ⎝ a2 ⎠

a0 = 7/5,  a1 = −4/5,  a2 = −6/7

y = a0·1 + a1·(−x/2) + a2·(x²/2 − 1)
y = 7/5 + (−4/5)·(−x/2) + (−6/7)·(x²/2 − 1)
y = 7/5 + (2/5)·x + 6/7 − (3/7)·x²
y = 79/35 + (2/5)·x − (3/7)·x²

What are the dimensions of the unitary matrices U, V, as well as of the diagonal matrix D, in the singular value decomposition G = U·D·V^T? (With the definitions above: U is 5×5, D is 5×3 and V is 3×3.)

What are the entries (singular values) in the matrix D from above? The singular values are the square roots of the non-zero eigenvalues of G^T·G and therefore {√5, √(5/2), √(7/2)}.

Give three orthogonal basis polynomials (with respect to the data given) as formulas in the variable x. {1, −x/2, x²/2 − 1} is orthogonal because G^T·G is diagonal.

6.4 Chebyshev polynomials

6.4.1 Idea

Approximate a continuous function by a Chebyshev polynomial.

6.4.2 Definition

Chebyshev polynomials are defined as Tn(x) = cos(n·arccos(x)) with n = 0, 1, ... and −1 ≤ x ≤ 1. Due to that, most sampling points lie near the edges. The first polynomials can be found in Equation 42.

T0 = 1                     x⁰ = 1 = T0
T1 = x                     x¹ = x = T1
T2 = 2x² − 1               x² = (1/2)T2 + (1/2)T0
T3 = 4x³ − 3x              x³ = (1/4)T3 + (3/4)T1
T4 = 8x⁴ − 8x² + 1         x⁴ = (1/8)T4 + (1/2)T2 + (3/8)T0
T5 = 16x⁵ − 20x³ + 5x      x⁵ = (1/16)T5 + (5/16)T3 + (5/8)T1
(42)

Further polynomials can be calculated with the recursion formula T_{n+1}(x) = 2x·Tn(x) − T_{n−1}(x) (n ≥ 1) with the initial conditions T0(x) = 1, T1(x) = x.
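
A small Python sketch (my own helper) generates the coefficient lists of T0, ..., Tn with exactly this recursion:

    def chebyshev_T(n):
        # coefficient lists (lowest degree first) for T_0 ... T_n
        T = [[1], [0, 1]]                     # T0 = 1, T1 = x
        for _ in range(2, n + 1):
            prev, cur = T[-2], T[-1]
            nxt = [0] + [2 * c for c in cur]  # 2x * T_k ...
            for i, c in enumerate(prev):      # ... minus T_{k-1}
                nxt[i] -= c
            T.append(nxt)
        return T[:n + 1]

    for row in chebyshev_T(4):
        print(row)   # e.g. [-1, 0, 2] means T2 = 2x^2 - 1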

6.4.3 Properties

6.4.4 Usage

For the Chebyshev approximation we use the Tn instead of the monomials g as basis functions.

      | T0(x)  T1(x)   T2(x)     ...  Tm(x)
   x0 |   1     x0    2·x0² − 1           ⎛ T0(x0)=1   T1(x0)=x0   ...  Tm(x0) ⎞
   x1 |   1     x1    2·x1² − 1           ⎜ T0(x1)=1   T1(x1)=x1   ...  Tm(x1) ⎟
G: x2 |   1     x2    2·x2² − 1        =  ⎜    ...        ...      ...    ...  ⎟
  ... |  ...   ...       ...              ⎝ T0(xN)=1   T1(xN)=xN   ...  Tm(xN) ⎠
   xN |   1     xN    2·xN² − 1

The matrix G^T G can be calculated according to Equation 43.

⟨Tj, Tk⟩ := Σ_{i=0}^{N} Tj(xi)·Tk(xi) = { 0 if j ≠ k ;  (N+1)/2 if j = k ≠ 0 ;  N+1 if j = k = 0 }   (j, k = 0, ..., N)

xi = cos( (2i + 1)/(2(N + 1)) · π )   (i = 0, 1, ..., N)
(43)

And results then in the follwing matrix

                 ⌈                       ⎤
                    N + 1   0   ···  0
 T               ⎢⎢   0     N+21- ···  0   ⎥⎥
G  G     ︸=︷︷︸    ⎢⌊    ..     ..   ..    ..  ⎥⎦
    When Chebyshev     .     .     .   .
                     0      0   ... N+12-  m ×m

                        ⌈    1           1      1           1           1            1   ⎤
                           N+1-T0(x0)= N+1-    N+1T0 (x1)= N+1-   ···   N+1-T0(xN)=  N+1-
( T  )−1 T              ⎢⎢ N2+1-T1(x0)= N2+1-x0  N2+1T1 (x1)= N2+1x1   ···  N2+1T1 (xN)= N+21-xN ⎥⎥
 G G    G      ︸=︷︷︸     ⎢⌊          ..                  ..           ..            ..         ⎥⎦
           WhenChebyshev        2   .               2  .            .       2   .
                              N+1Tm  (x0)          N+1Tm (x1)      ···      N+1Tm (xN)       m×N

Recipe. Goal: approximate y(t) (defined on [a,b]) by a Chebyshev polynomial of degree m:
1. transform y(t) onto the standard interval [−1,1] (affine transformation): t = a + (b−a)/2·(x + 1)
2. express y(x) in terms of the Tn
3. truncate y(x) at degree m: ym(x)
4. back transformation: x = 2(t − a)/(b − a) − 1
5. insert the Tn into ym, see Equation 42
6. error estimation for the truncation method: max_t |y(t) − ym(t)| (the removed part)



Example: Approximation of y(t) = t³ with degree m = 2 on the interval (a,b) = (0,1).

1. Transformation with t = (x+1)/2:
   y(x) = ((x+1)/2)³ = (1/8)(x³ + 3x² + 3x + 1)
2. Expand with Tn, see also Equation 42:
   y(x) = (1/8)·( (T3(x) + 3T1(x))/4 + 3·(T2(x) + T0(x))/2 + 3T1(x) + T0(x) )
        = (1/32)T3(x) + (3/16)T2(x) + (15/32)T1(x) + (5/16)T0(x)
3. Truncate to degree m = 2: y(x) ≈ (3/16)T2(x) + (15/32)T1(x) + (5/16)T0(x)
4. Back transformation with x = 2t − 1: y(t) ≈ (3/16)T2(2t−1) + (15/32)T1(2t−1) + (5/16)T0(2t−1)
5. Tn(2t−1) substitution: y(t) ≈ (3/16)·(2(2t−1)² − 1) + (15/32)·(2t−1) + 5/16
6. Error estimation: max_t |(1/32)·T3(2t−1)| = 1/32

6.5 Continuous Chebyshev approximation

The function inside the integral is often called a weight function w(x); in the formula below w(x) = 1/√(1 − x²).

⟨Tj, Tk⟩_cont := ∫_{−1}^{1} Tj(x)·Tk(x) · dx/√(1 − x²) = { 0 if j ≠ k ;  π/2 if j = k ≠ 0 ;  π if j = k = 0 }
(44)

Equation 44 is true because the polynomials are orthogonal!

Example: Chebyshev continuous least-squares parabola on the interval [0,1] for y(t) = t³. First the interval [0,1] is transformed to [−1,1] by x = 2t − 1. This is shown with ??:

−1 + 2·(x − a)/(b − a) = −1 + 2·(x − 0)/(1 − 0) = 2x − 1

In our case our new x is called t. Therefore x = 2t − 1, t = (x + 1)/2 and y = (x + 1)³/8. Now we can use Equation 45 and get the following results:

a0 = (1/π)·∫_{−1}^{1} y(x) · dx/√(1 − x²) = 5/16
a1 = (2/π)·∫_{−1}^{1} y(x)·T1(x) · dx/√(1 − x²) = 15/32
a2 = (2/π)·∫_{−1}^{1} y(x)·T2(x) · dx/√(1 − x²) = 3/16

p(x) = Σ_{j=0}^{m} aj·Tj(x)   where   aj = (1/π)·∫_{−1}^{1} y(x)/√(1 − x²) dx  for j = 0,
                                      aj = (2/π)·∫_{−1}^{1} y(x)·Tj(x)/√(1 − x²) dx  for j > 0
(45)

6.6 Continuous Least-Square Legendre approximation

Legendre Polynomials: The Legendre polynomials are defined by the Rodriguez formula which can be seen in Equation 46

Pn(x) = 1/(2^n·n!) · d^n/dx^n (x² − 1)^n
(46)

Where the first polynomials can be seen in Equation 47

Pn(x)= --1n--·-dnn-(x2− 1)2  P0(x)= 1  P1(x) = x  P2(x)= 1-(3x2− 1)
       2  n!  dx                                        2
P3(x)= 1-(5x3− 3x)  P4(x) = 1(35x4 − 30x2+ 3)  P5(x) = 1(63x5 − 70x3 + 15x)
       2                    8                          8
(47)

Continuous Legendre least-squares approximation
If y(x) is function on [1,1] which is absolutely square-integrable with respect to the weight function w(x) = 1 (x[1,1]) in the sense that 11|y(x)|2dx<∞, then the continuous square-sum of residuals in Equation 48

S := ∫₋₁¹ ( y(x) − Σ_{j=0}^{m} a_j·P_j(x) )² dx
(48)

is minimal when the coefficients a_j have the values given in Equation 49. This equation also describes the resulting polynomial.

p(x) = Σ_{j=0}^{m} a_j·P_j(x)   where   a_j = ((2j+1)/2)·∫₋₁¹ y(x)·P_j(x)·dx   (j = 0, 1, ..., m)
(49)

where m is the degree.

6.6.1 Legendre continuous least square parabola

Legendre continuous least-squares parabola on the interval t ∈ [0,1] for y(t) = t³.

First one has to transform to the interval x ∈ [−1,1]. This results in the following: x = 2t − 1, t = (x+1)/2, therefore y(x) = (x+1)³/8. With Equation 49 one then gets the following:

a₀ = (1/2)·∫₋₁¹ y(x)·P₀(x)·dx = (1/2)·∫₋₁¹ y(x)·dx = 1/4        (with y(x) = (x+1)³/8, P₀(x) = 1)
a₁ = (3/2)·∫₋₁¹ y(x)·P₁(x)·dx = (3/2)·∫₋₁¹ y(x)·x·dx = 9/20
a₂ = (5/2)·∫₋₁¹ y(x)·P₂(x)·dx = (5/2)·∫₋₁¹ y(x)·(3x² − 1)/2·dx = 1/4
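These coefficients can also be verified numerically; a minimal sketch (assuming numpy and scipy are available):

```python
import numpy as np
from scipy.integrate import quad
from numpy.polynomial.legendre import Legendre

y = lambda x: (x + 1)**3 / 8               # t^3 transformed onto [-1, 1]

def a(j):
    Pj = Legendre.basis(j)                 # the Legendre polynomial P_j
    val, _ = quad(lambda x: y(x) * Pj(x), -1, 1)
    return (2 * j + 1) / 2 * val           # Equation 49

print([round(a(j), 6) for j in range(3)])  # [0.25, 0.45, 0.25] = 1/4, 9/20, 1/4
```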

6.7 Multi-variate least-squares

Note: the total degree of a term of a multivariate polynomial is obtained by adding up the degrees of all variables in that term; it does not matter that there are different variables. The largest of these sums is the degree of the polynomial.

Residuals: r_i = z_i − f(x⃗_i),   Σ r_i² → min!

Design matrix (rows indexed by the arguments x⃗₀,…,x⃗_N, entry (i,j) = g_j(x⃗_i)), e.g. with basis {1, y, y², …, xy²}:

      |  g₀       g₁       g₂            |  1   y    y²   ⋯   xy²
 x⃗₀  | g₀(x⃗₀)  g₁(x⃗₀)  g₂(x⃗₀)     x⃗₀ |  1   y₀   y₀²  ⋯
G =  x⃗₁  | g₀(x⃗₁)  g₁(x⃗₁)  g₂(x⃗₁)  =  x⃗₁ |  1   y₁   y₁²
 ⋮    |    ⋮        ⋮        ⋮        ⋮   |  ⋮    ⋮    ⋱
 x⃗_N  |                             x⃗_N |  1   y_N  y_N²

Basis Functions (d = dim, here d = 2)

Σ_{j₁=0}^{m₁} Σ_{j₂=0}^{m₂} ··· Σ_{j_d=0}^{m_d} a_{j₁,j₂,…,j_d} · g_{j₁}(x⁽¹⁾)·g_{j₂}(x⁽²⁾)···g_{j_d}(x⁽ᵈ⁾)

Product

{1, x} ('2' elements) × {1, y, y²} ('3' elements):
{1, y, y², x, xy, xy²} = '6' elements

Statistical norm

{ ((x⁽¹⁾ − µ₁)/σ₁)ʲ · ((x⁽²⁾ − µ₂)/σ₂)ᵏ  |  j, k ∈ ℕ₀ }

6.7.1 Example one

Normally, a set of four 3d-points (x,y,z) is not contained in one single plane. But generally there is a plane coming close to the 3d-points in the sense of least-squares approximation.
The (x,y,z)-data in this problem is: A = (1,0,0), B = (0,1,0), C = (0,2,-1), D = (1,3,1)

(a)
Give a reasonable set of basis functions. Hint: A plane has total degree 1 (complete basis)

As a basis function, one can use:
{1,x,y}
(b)
Write down the design matrix according to a) and the normal equations
            Design matrix
      |   g₀       g₁       g₂                 |  1  x  y
 x⃗₀  |  g₀(x⃗₀)  g₁(x⃗₀)  g₂(x⃗₀)       (1,0)  |  1  1  0
G =  x⃗₁  |  g₀(x⃗₁)  g₁(x⃗₁)  g₂(x⃗₁)   =   (0,1)  |  1  0  1
 ⋮    |    ⋮        ⋮        ⋮           (0,2)  |  1  0  2
 x⃗_N  |                                 (1,3)  |  1  1  3
(c)
Solve the system of normal equations and give a functional formula for the approximating plane.

According to Equation 30 the following is true: a⃗ = (G^T·G)^{−1}·G^T·z⃗ (normal equations), which gives

a⃗ = ⎡ −4/5 ⎤
    ⎢   1  ⎥
    ⎣  1/5 ⎦

f(x,y) = −4/5 + x + (1/5)·y
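The fit can be reproduced in a few lines; a minimal sketch (assuming numpy; np.linalg.lstsq solves the same normal equations):

```python
import numpy as np

# (x, y, z) data: A, B, C, D
pts = np.array([[1, 0, 0], [0, 1, 0], [0, 2, -1], [1, 3, 1]], dtype=float)
x, y, z = pts.T

G = np.column_stack([np.ones_like(x), x, y])   # design matrix for basis {1, x, y}
a, *_ = np.linalg.lstsq(G, z, rcond=None)      # least-squares solution of G a = z
print(a)                                        # [-0.8  1.   0.2]
```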

6.7.2 Example three

Express x3,x4 as linear combinations of the Chebyshev polynomials T0(x),T1(x),T2(x),T3(x),T4(x).

From Equation 42 one knows that T₃(x) = 4x³ − 3x ⇒ (1/4)T₃(x) = x³ − (3/4)x ⇒ (1/4)T₃(x) + (3/4)T₁(x) = x³

From Equation 42 one knows that T₄(x) = 8x⁴ − 8x² + 1 ⇒ (1/8)T₄(x) = x⁴ − x² + 1/8 ⇒ (1/8)T₄(x) + (1/2)T₂(x) = x⁴ − 3/8 ⇒ (1/8)T₄(x) + (1/2)T₂(x) + (3/8)T₀ = x⁴

6.7.3 Example six

Express x3 as linear combinations of the Legendre polynomials P0(x),P1(x),P2(x),P3(x).
From Equation 47 one knows that P₃(x) = (1/2)(5x³ − 3x) ⇒ (2/5)P₃(x) = x³ − (3/5)x ⇒ (2/5)P₃(x) + (3/5)P₁(x) = x³
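Both basis changes (Examples three and six) can be verified with numpy's conversion helpers; a quick sketch:

```python
from numpy.polynomial import chebyshev as C, legendre as L

print(C.poly2cheb([0, 0, 0, 1]))     # x^3 -> [0, 0.75, 0, 0.25]   = (3/4)T1 + (1/4)T3
print(C.poly2cheb([0, 0, 0, 0, 1]))  # x^4 -> [0.375, 0, 0.5, 0, 0.125]
print(L.poly2leg([0, 0, 0, 1]))      # x^3 -> [0, 0.6, 0, 0.4]     = (3/5)P1 + (2/5)P3
```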

6.7.4 Example seven

Compute continuously approximating least-squares lines for the model function y(t) = t² (0 ≤ t ≤ 1) by

(a)
Chebyshev approximation with the weight function w(x) = 1/√(1−x²) (−1 < x < 1)

Firstly one has to bring it into the correct range. One can do that with Equation 50, which maps the interval [a,b] onto the interval [c,d]:

f(t) = c + ((d − c)/(b − a))·(t − a)
(50)

f(t) = −1 + ((1 + 1)/(1 − 0))·(t − 0) = −1 + 2t ⇒ x = −1 + 2t ⇒ t = (1/2)(x + 1)

y = (1/4)x² + (1/2)x + 1/4

(1/8)T₂ = (1/4)x² − 1/8 ⇒ y = (1/8)T₂ + (1/2)T₁ + (3/8)T₀ ⇒ line = 3/8 + (1/2)x = 3/8 + (1/2)(2t − 1)
(b)
Legendre approximation with the weight function w(x) = 1
(c)
Estimate the maximum approximation errors in a) and b) by the coefficients of the Chebyshev polynomial T₂(x) and the Legendre polynomial P₂(x), respectively
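Parts a) to c) can be cross-checked numerically; a minimal sketch (assuming numpy) that expands the transformed y in both bases, truncates to a line, and reads the error bounds off the dropped degree-2 coefficients:

```python
import numpy as np
from numpy.polynomial import chebyshev as C, legendre as L

p = np.array([1, 2, 1]) / 4          # y = (x+1)^2/4, i.e. t^2 mapped onto [-1, 1]
cheb, leg = C.poly2cheb(p), L.poly2leg(p)
print(cheb[:2])                      # [3/8, 1/2]  -> Chebyshev line 3/8 + x/2
print(leg[:2])                       # [1/3, 1/2]  -> Legendre  line 1/3 + x/2
# c) error estimates via the dropped coefficients (|T2|, |P2| <= 1 on [-1, 1]):
print(abs(cheb[2]), abs(leg[2]))     # 1/8 and 1/6
```

Back-substituting x = 2t − 1 turns the Legendre line into t − 1/6 on [0,1].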

7 Differentials, Taylor formulas and Jacobian

7.1 Differential

7.1.1 Definition

The purpose of the differential is to measure error propagation (how much is y, the dependent variable, wrong when x, the independent variable, is wrong by a certain amount). The differential df is the linear amount of change between a variable and a function, as can be seen in Equation 51, whereby ∆f is the whole amount of change, not only the linear one (the difference between two points). For small dx one can say ∆f ≈ df and ∆f/f ≈ df(x₀)/f(x₀).

∆f = f (x0+ h)− f (x0) ≈ df = f′(x0)dx = f′(x0)h =f ′(x0)∆x
(51)
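A quick numeric illustration of Equation 51 (plain Python, our own example f(x) = x²):

```python
# compare the whole change Δf with its linear part df = f'(x0)*h
f = lambda x: x**2
x0, h = 2.0, 0.01
delta_f = f(x0 + h) - f(x0)   # 0.0401  (whole change)
df = 2 * x0 * h               # 0.04    (linear part, f'(x) = 2x)
print(delta_f, df)
```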

7.2 Taylor

As one has seen before, ∆f ≈ df for small dx and one has used only the linear part. To further improve the approximation one can use not only the linear part (first derivative) but also the squared one (second derivative) and so on. Therefore, a function at a certain point can be approximated by its derivatives at this point, which is called Taylor series approximation; it can be seen in Equation 52 for the one-dimensional case and in Equation 53 for the multidimensional case.

∆f = (1/1!)·df(x₀) + (1/2!)·d²f(x₀) + ... + (1/n!)·dⁿf(x₀) + R_n(x₀,h)
(52)

The vector of the partial derivatives is called the gradient of f and is denoted grad(f) or ∇f(x). Therefore ∆f ≈ ∇f(x)·h⃗. Equation 53 shows the Taylor series approximation for the multi-indices α = {α₁,...,α_n} and |α| = α₁ + ··· + α_n.

f(x⃗₀ + h⃗) = f(x⃗₀) + (1/1!)·∇f(x⃗₀)·h⃗ + Σ_{|α|=2}^{N} (1/α!)·(∂^{|α|}f(x⃗₀)/∂x⃗^α)·h⃗^α + Σ_{|α|=N+1} R_α(x⃗₀,h⃗)·h⃗^α
(53)

The remainder terms R_α(x⃗₀,h⃗) are absolutely bounded by max_{x∈S} |(1/α!)·∂^α f(x⃗)/∂x⃗^α| with |α| = N+1, where S = x⃗₀ + ([−h₁,h₁] × [−h₂,h₂] × ··· × [−h_n,h_n]) is an n-dimensional 'rectangle' with center x⃗₀. (This idea is used again later for the Jacobian matrix and determinant.)
The formulas for up to order four can be found in Equation 54

f(x,y) = f(0+x, 0+y) ≈
f(0,0) + (∂f/∂x)·x + (∂f/∂y)·y + (∂²f/∂x²)·(1/2!)·x² + (∂²f/∂x∂y)·(1/(1!·1!))·xy + (∂²f/∂y²)·(1/2!)·y²
+ (∂³f/∂x³)·(1/3!)·x³ + (∂³f/∂x²∂y)·(1/(2!·1!))·x²y + (∂³f/∂x∂y²)·(1/(1!·2!))·xy² + (∂³f/∂y³)·(1/3!)·y³
+ (∂⁴f/∂x⁴)·(1/4!)·x⁴ + (∂⁴f/∂x³∂y)·(1/(3!·1!))·x³y + (∂⁴f/∂x²∂y²)·(1/(2!·2!))·x²y² + (∂⁴f/∂x∂y³)·(1/(1!·3!))·xy³ + (∂⁴f/∂y⁴)·(1/4!)·y⁴
(54)

7.2.1 Example

The bivariate symmetric (!) function f(x,y) = e^{−(x²+y²)/2} has to be approximated by a bivariate Taylor polynomial of order 4 around (0,0) by

1.
evaluating and using Equation 53 for the partial derivatives:
1 + 0x + 0y − (1/2)x² + 0xy − (1/2)y² + 0x³ + 0x²y + 0xy² + 0y³ + (3/4!)x⁴ + 0x³y + (1/(2·2))x²y² + 0xy³ + (3/24)y⁴
= 1 − (1/2)x² − (1/2)y² + (3/24)x⁴ + (1/4)x²y² + (3/24)y⁴
2.
multiplying the univariate Taylor series for the exponential function:
f = e^{−x²/2}·e^{−y²/2} ≈ (1 − x²/2 + x⁴/(4·2) + ...)·(1 − y²/2 + y⁴/(4·2) + ...)
(by substituting u = −x²/2 and u = −y²/2, resp.)
= 1 − x²/2 − y²/2 + x⁴/8 + (1/4)x²y² + y⁴/8 + ···
3.
substituting into the univariate Taylor series for the exponential function:
Subst: u = −(x²+y²)/2 ⇒ f ≈ 1 − (x²+y²)/2 + (1/2)·((x²+y²)/2)²
= 1 − x²/2 − y²/2 + (1/8)x⁴ + (1/4)x²y² + (1/8)y⁴
4.
What value do you expect for the error limit lim_{∥h⃗∥→0} …?
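Approach 1 can also be automated symbolically; a minimal sketch (assuming sympy is available; the truncation helper is our own):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(-(x**2 + y**2) / 2)
# expand to a bivariate polynomial: series in x first, then in y
T = sp.expand(sp.series(sp.series(f, x, 0, 5).removeO(), y, 0, 5).removeO())
# keep only the terms of total degree <= 4
P = sp.Poly(T, x, y)
T4 = sum(c * x**m[0] * y**m[1] for m, c in zip(P.monoms(), P.coeffs()) if sum(m) <= 4)
print(T4)   # 1 - x**2/2 - y**2/2 + x**4/8 + x**2*y**2/4 + y**4/8
```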

7.3 Jacobian matrix and determinant

In the Taylor series approximation for multi-indices, one has seen that ∆f ≈ ∇f(x)·h⃗ for small dx (see also Equation 53). When one writes all those gradients into one matrix, one gets the so-called Jacobian matrix, which tells us the same as mentioned in subsubsection 7.1.1, but for a multivariable problem: how y₁, y₂, … (dependent variables) change when x₁, x₂, … (independent variables) change. Furthermore it has the nice property that the determinant of this matrix also describes how the volume changes when one changes a certain variable. Therefore the matrix can be used to analyse error propagation or to do volume calculations in a different coordinate system.
Below one can find some definitions.

∂f⃗(u,v)/∂u = ( ∂x(u,v)/∂u ; ∂y(u,v)/∂u )   and   ∂f⃗(u,v)/∂v = ( ∂x(u,v)/∂v ; ∂y(u,v)/∂v )
(55)

dy₁·dy₂···dy_n = det(J_f(x⃗))·dx₁·dx₂···dx_n
(56)

∆y₁·∆y₂···∆y_n ≈ det(J_f(x⃗))·∆x₁·∆x₂···∆x_n
(57)

In the special case that n = m the Jacobian matrix is a square matrix and thus has a determinant (called Jacobian determinant):

D(f₁,f₂,...,f_n)/D(x₁,x₂,...,x_n) = det(J_f(x⃗))   or   det(J_f) = √(det(J_f^T(x⃗)·J_f(x⃗)))
(58)

T⁻¹:  det(J_T) = 1/det(J_{T⁻¹})
(59)

J = [ ∂f⃗/∂x₁  ···  ∂f⃗/∂x_n ] = ⎡ ∇ᵀf₁(x) ⎤ = ⎡ ∂f₁(x)/∂x₁   ···   ∂f₁(x)/∂x_n ⎤
                                ⎢    ⋮    ⎥   ⎢      ⋮        ⋱         ⋮      ⎥
                                ⎣ ∇ᵀf_m(x)⎦   ⎣ ∂f_m(x)/∂x₁  ···   ∂f_m(x)/∂x_n⎦

7.3.1 Estimating navigation error by inversion of Jacobian determinant

Let's assume one measures angles (α,γ) and wants the position (x,y). Let's call the transformation from (α,γ) to (x,y) T, and the one from (x,y) to (α,γ) T⁻¹.

B = (−40, −40) = (x₁, y₁)
C = (−40, 2140) = (x₂, y₂)
A = (3040, 1050) = (x₃, y₃)
a = 2180, c = 3267.19
P = (x, y)
∆α = 0.1°, ∆γ = 0.1°

To solve this problem one can follow the following approach:

1.
Description of the problem
2.
Transformation (equalities) from cosine theorems (generalized Pythagoras)
3.
Implicit differentiation
4.
Computation of the Jacobian matrix J_{T⁻¹}(x,y)
5.
Computation of the Jacobian determinant |J_{T⁻¹}(x,y)|
6.
Elimination of angles
7.
Computing |J_T(α,γ)| expressed in position coordinates x, y

7.3.2 Example three

The formulas x = r·cos(φ), y = r·sin(φ) (0 < r, 0 ≤ φ < 2π) define the coordinate transform T from polar coordinates (r,φ) to Cartesian coordinates (x,y) in the punctured plane ℝ² ∖ {(0,0)}.
Compute the Jacobian matrix J_T(r,φ) and its Jacobian (determinant) det J_T(r,φ), as well as the Jacobians (determinants) det J_T(x,y), det J_{T⁻¹}(r,φ) and det J_{T⁻¹}(x,y).

First of all one has to write down the transformations

x= r ·cosϕ
y= r ·sin ϕ

Once one has done that one can calculate the Jacobian Matrix:

J = ⎡ ∂x/∂r  ∂x/∂φ ⎤ = ⎡ cos φ   −r·sin φ ⎤
    ⎣ ∂y/∂r  ∂y/∂φ ⎦   ⎣ sin φ    r·cos φ ⎦

and afterwards the determinant

det J_T(r,φ) = r·cos²φ + r·sin²φ = r·(cos²φ + sin²φ) = r

When one wants to express r in x and y, one knows that r = √(x² + y²), therefore

det J_T(x,y) = √(x² + y²)

and

det J_{T⁻¹}(r,φ) = 1/r
det J_{T⁻¹}(x,y) = 1/√(x² + y²)
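This can be verified symbolically; a minimal sketch (assuming sympy is available):

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
# polar -> Cartesian transform as a vector, then its Jacobian w.r.t. (r, phi)
J = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi)]).jacobian([r, phi])
print(sp.simplify(J.det()))   # r
```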

7.3.3 Example one

Elliptical coordinates (σ,τ) (σ > 1, −1 < τ < 1) for a = 1 are connected to Cartesian coordinates (x, y) through the transforming formulas:

x = σ·τ
y = √(σ² − 1)·√(1 − τ²)

Compute the Jacobian matrix

J= J(σ, τ)

J = ⎡ ∂x/∂σ  ∂x/∂τ ⎤
    ⎣ ∂y/∂σ  ∂y/∂τ ⎦
  = ⎡ τ                                    σ                                  ⎤
    ⎣ (1/2)(σ²−1)^{−1/2}·2σ·√(1−τ²)    −(1/2)(1−τ²)^{−1/2}·2τ·√(σ²−1) ⎦
  = ⎡ τ                        σ                      ⎤
    ⎣ σ·√(1−τ²)/√(σ²−1)    −τ·√(σ²−1)/√(1−τ²) ⎦

Express the Jacobian determinant

det(J) = −τ²·√(σ²−1)/√(1−τ²) − σ²·√(1−τ²)/√(σ²−1)
       = (−τ²(σ²−1) − σ²(1−τ²)) / (√(1−τ²)·√(σ²−1))
       = (τ² − σ²) / (√(1−τ²)·√(σ²−1))

Express the Jacobian determinant in x, y: since the denominator is exactly y, we know it already. Furthermore the following is true: √((1 + x² + y²)² − 4x²) = σ² − τ², hence

det(J) = −√((1 + x² + y²)² − 4x²) / y
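Again a symbolic cross-check; a minimal sketch (assuming sympy):

```python
import sympy as sp

s, t = sp.symbols('sigma tau')
J = sp.Matrix([s * t, sp.sqrt(s**2 - 1) * sp.sqrt(1 - t**2)]).jacobian([s, t])
# simplifies to (tau**2 - sigma**2)/(sqrt(1 - tau**2)*sqrt(sigma**2 - 1))
print(sp.simplify(J.det()))
```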

8 Ordinary differential equations

8.1 Definition

An ODE is a differential equation whose derivatives belong to only one variable. Furthermore, it is possible that an ODE cannot be solved explicitly, which is why this chapter investigates how ODEs can be solved numerically.
A first-order initial value problem can be expressed like the one in Equation 60

y′(x) = f(x, y(x))   with initial condition   y(x₀) = y₀
(60)

8.2 Explicit methods

One way to solve the problem numerically is by Taylor series approximation, as can be seen in Equation 61. (Remember that the Taylor series is evaluated from one single point.) The idea is that one creates a Taylor series approximation up to order p at the starting point, then moves with a step size h to the new position obtained from the Taylor series approximation, and does the same there until one reaches the destination.

y(x+h) = y(x) + (y′(x)/1!)·h + (y″(x)/2!)·h² + (y‴(x)/3!)·h³ + (y⁽⁴⁾(x)/4!)·h⁴ + ··· + (y⁽ᵖ⁾(x)/p!)·hᵖ + (y⁽ᵖ⁺¹⁾(ξ)/(p+1)!)·h^{p+1}   (the last term is the remainder term)
(61)

The calculations up to order three (p=3) are given in Equation 62:

y(x+h) = y(x) + (f(x,y(x))/1!)·h
 + (1/2!)·( ∂f(x,y(x))/∂x · 1 + ∂f(x,y(x))/∂y · f(x,y(x)) )·h²
 + (1/3!)·( ∂²f/∂x² · 1 + 2·∂²f/∂x∂y · f + ∂²f/∂y² · f² + (∂f/∂y)²·f + ∂f/∂x · ∂f/∂y )·h³ + ...
 + (1/4!)·y⁽⁴⁾(x)·h⁴ + ... + (1/p!)·y⁽ᵖ⁾(x)·hᵖ + (1/(p+1)!)·y⁽ᵖ⁺¹⁾(ξ)·h^{p+1}   (remainder term)
(all derivatives of f evaluated at (x, y(x)))
(62)

8.2.1 Euler method

The Euler method is a special case of the explicit methods with p = 1 and h = const. The formulas to calculate it can be found in Equation 63.

y₀ = y(x₀)
y(x₀+h) ≈ y₁ = y₀ + f(x₀, y₀)·h ≈ y(x₀) + y′(x₀)·h
y(x₁+h) ≈ y₂ = y₁ + f(x₁, y₁)·h ≈ y(x₁) + y′(x₁)·h
⋮
y(x_{n−1}+h) ≈ y_n = y_{n−1} + f(x_{n−1}, y_{n−1})·h ≈ y(x_{n−1}) + y′(x_{n−1})·h
(63)
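Equation 63 is a three-line loop in code; a minimal sketch (plain Python, helper name is ours):

```python
def euler(f, x0, y0, h, n):
    """Explicit Euler: y_{k+1} = y_k + h*f(x_k, y_k), Equation 63."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# e.g. y' = x*y**(1/3), y(1) = 1 (the example below), two steps of h = 0.1
xs, ys = euler(lambda x, y: x * y**(1/3), 1.0, 1.0, 0.1, 2)
print(ys)   # rough first-order approximations of y(1.1) and y(1.2)
```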

8.2.2 Error Calculation

8.2.3 Example

Solve the initial value problem y′ = x·y^{1/3} with y(1) = 1 numerically by the method of Taylor with order p = 4 and fixed step size h = 0.1 for the x-values 1.1 and 1.2 (two steps). All final (!) results should be rounded to the 10th digit. Furthermore, compute the local error (slope) as well as the global error for the two steps. Note that the exact solution of the equation is given by Equation 67.

y = ((x² + 2)/3)^{3/2}
(67)


To find out y at x = 1.1, one first needs to calculate its Taylor series approximation at the starting point.

x = 1, y = 1,   y₀ = 1
y′ = x·y^{1/3} = 1
y″ = y^{1/3} + (1/3)·x²·y^{−1/3} = 4/3
y‴ = x·y^{−1/3} − (1/9)·x³·y^{−1} = 8/9
y⁽⁴⁾ = y^{−1/3} − (2/3)·x²·y^{−1} + (1/9)·x⁴·y^{−5/3} = 4/9

With Equation 61 one can then write down the following:

y(1.1) = 1 + (1/1!)·h + (4/3)·(1/2!)·h² + (8/9)·(1/3!)·h³ + (4/9)·(1/4!)·h⁴ = 1.1068166666667

Then one does the same at the new location obtained from the previous result.

x = 1.1,   y = 1.1068166666667
y(1.2) = y + (x·y^{1/3}/1!)·h + ((y^{1/3} + (1/3)x²y^{−1/3})/2!)·h² + ((x·y^{−1/3} − (1/9)x³y^{−1})/3!)·h³ + ((y^{−1/3} − (2/3)x²y^{−1} + (1/9)x⁴y^{−5/3})/4!)·h⁴ = 1.227872941753

The global error is defined by Equation 64 and therefore has the following result:

max_{0≤i≤2} |y_i − y(x_i)| = max{ |y₀ − y(x₀)|, |y₁ − y(x₁)|, |y₂ − y(x₂)| }
= max{ |1 − 1|, |y₁ − ((1.1² + 2)/3)^{3/2}|, |y₂ − ((1.2² + 2)/3)^{3/2}| } = 1.14734·10⁻⁷

The local error is defined by Equation 66 and therefore has the following result:

n = 0:
τ_h(x₀) := (y(x₀+h) − y(x₀))/h − ( y′(x₀)/1! + (y″(x₀)/2!)·h + ··· + (y⁽⁴⁾(x₀)/4!)·h³ )
 = (y(1.1) − y(1))/h − ( y′(1)/1! + (y″(1)/2!)·h + ··· + (y⁽⁴⁾(1)/4!)·h³ )
 = ( ((1.1² + 2)/3)^{3/2} − ((1² + 2)/3)^{3/2} )/0.1 − ( 1 + (2/3)·0.1 + (4/27)·0.1² + (1/54)·0.1³ ) = −6.035828648·10⁻⁷

and for n=1 one has the following result:

τ_h(x₁) := (y(x₁+h) − y(x₁))/h − ( y′(x₁)/1! + (y″(x₁)/2!)·h + ··· + (y⁽⁴⁾(x₁)/4!)·h³ )
 = (y(1.2) − y(1.1))/h − ( y′(1.1)/1! + (y″(1.1)/2!)·h + ··· + (y⁽⁴⁾(1.1)/4!)·h³ )
 = ( ((1.2² + 2)/3)^{3/2} − ((1.1² + 2)/3)^{3/2} )/0.1 − (...) = −5.2265445216·10⁻⁷
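The order-4 Taylor steps can be automated; a minimal sketch (assuming sympy is available) that builds the total derivatives y″, y‴, y⁽⁴⁾ symbolically from y′ = f:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x * y**sp.Rational(1, 3)

# successive total derivatives along the ODE: d/dx = ∂/∂x + f·∂/∂y
ds = [f]
for _ in range(3):
    ds.append(sp.diff(ds[-1], x) + sp.diff(ds[-1], y) * f)

h, xn, yn = 0.1, 1.0, 1.0
for _ in range(2):
    yn = yn + sum(float(d.subs({x: xn, y: yn})) / sp.factorial(k + 1) * h**(k + 1)
                  for k, d in enumerate(ds))
    xn += h
    print(xn, yn)   # ≈ 1.1068166667 and 1.2278729418
```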

8.3 Explicit Runge-Kutta Methods

Since the calculation of the Taylor approximation series is quite tedious, one came up with another method, which is recursive and gives the same result as the Taylor series approximation (big advantage: no computations of derivatives up to order p are needed). In Equation 68 one can find the common Runge-Kutta method of order 4. As one can see, it has seven parameters; when one sets it equal to the Taylor series approximation, all those parameters are determined in the end, as can be seen in Equation 69.

k₁ = f(x, y)
k₂ = f(x + mh, y + mhk₁)
k₃ = f(x + nh, y + nhk₂)
k₄ = f(x + ph, y + phk₃)
y(x+h) ≈ y(x) + ahk₁ + bhk₂ + chk₃ + dhk₄
(68)

Method              | #Stages (s) | Solutions
Heun                | 2           | a = b = 1/2, m = 1 (n = p = c = d = 0)
Explicit midpoint   | 2           | a = 0, b = 1, m = 1/2 (n = p = c = d = 0)
Classic Runge-Kutta | 4           | m = n = 1/2, p = 1, a = d = 1/6, b = c = 1/3
(69)

For the classic Runge-Kutta method one can also write Equation 70, which results in Equation 71.

k₁ = hf(x, y)
k₂ = hf(x + mh, y + mk₁)
k₃ = hf(x + nh, y + nk₂)
k₄ = hf(x + ph, y + pk₃)
y(x+h) ≈ y(x) + ak₁ + bk₂ + ck₃ + dk₄
(70)

k₁ = hf(x, y)
k₂ = hf(x + (1/2)h, y + (1/2)k₁)
k₃ = hf(x + (1/2)h, y + (1/2)k₂)
k₄ = hf(x + h, y + k₃)
y(x+h) ≈ y(x) + (1/6)·(k₁ + 2k₂ + 2k₃ + k₄)
(71)

The errors are calculated in nearly the same way as in the previous section; see Equation 74.
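Equation 71 in code; a minimal sketch (plain Python, helper name is ours), applied to the example below:

```python
def rk4_step(f, x, y, h):
    """One classical Runge-Kutta step (Equation 71)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: x * y**(1/3)
y1 = rk4_step(f, 1.0, 1.0, 0.1)
y2 = rk4_step(f, 1.1, y1, 0.1)
print(y1, y2)   # ≈ 1.1068165804 and 1.2278795396
```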

8.3.1 Example

Solve the initial value problem y′ = x·y^{1/3}, y(1) = 1, numerically by the classical Runge-Kutta method of order p = 4 and fixed step size h = 0.1 for the x-values 1.1 and 1.2 (two steps). All final (!) results should be rounded to the 10th digit. The exact solution of the equation is y = ((x² + 2)/3)^{3/2}.
From Equation 71 one knows that:

k₁ = hf(x,y) = h·x·y^{1/3} = 0.1·1·1^{1/3} = 0.1
k₂ = hf(x + h/2, y + k₁/2) = 0.1·(1 + 0.05)·(1 + 0.05)^{1/3} = 0.10672161746556
k₃ = hf(x + h/2, y + k₂/2) = 0.1·(1 + 0.05)·(1 + 0.10672161746556/2)^{1/3} = 0.10683535998891
k₄ = hf(x + h, y + k₃) = 0.1·(1 + 0.1)·(1 + 0.10683535998891)^{1/3} = 0.11378552740653
y(x+h) ≈ y(x) + (1/6)·(k₁ + 2k₂ + 2k₃ + k₄) = 1.1068165803859

k₁ = hf(x,y) = h·x·y^{1/3} = 0.1·1.1·1.1068165803859^{1/3} = 0.11378488387236
k₂ = hf(x + h/2, y + k₁/2) = 0.1·(1.1 + 0.05)·(1.1068165803859 + 0.11378488387236/2)^{1/3} = 0.12096116868214
k₃ = hf(x + h/2, y + k₂/2) = 0.1·(1.1 + 0.05)·(1.1068165803859 + 0.12096116868214/2)^{1/3} = 0.12108536369591
k₄ = hf(x + h, y + k₃) = 0.1·(1.1 + 0.1)·(1.1068165803859 + 0.12108536369591)^{1/3} = 0.1284998068981

y(x+h) ≈ y(x) + (1/6)·(k₁ + 2k₂ + 2k₃ + k₄) = 1.2278795396403

As one can see, the result is nearly the same as in the previous Example. The local slope error can be calculated according to Equation 74.

τ_h(x₀) := (y(x₀+h) − y(x₀))/h − (k̃₁ + 2k̃₂ + 2k̃₃ + k̃₄)/(6h)
 = ( ((1.1² + 2)/3)^{3/2} − ((1² + 2)/3)^{3/2} )/0.1 − (k̃₁ + 2k̃₂ + 2k̃₃ + k̃₄)/0.6 = 2.5922467994·10⁻⁷

τ_h(x₁) := (y(x₁+h) − y(x₁))/h − (k̃₁ + 2k̃₂ + 2k̃₃ + k̃₄)/(6h)
 = ( ((1.2² + 2)/3)^{3/2} − ((1.1² + 2)/3)^{3/2} )/0.1 − (k̃₁ + 2k̃₂ + 2k̃₃ + k̃₄)/0.6 = 2.7090772847·10⁻⁷

The local error can be calculated according to Equation 73 and results in the following:

h·τ_h(x₀) = 2.5922467994·10⁻⁸

h·τ_h(x₁) = 2.7090772847·10⁻⁸

8.4 Butcher tableau

The Butcher tableau is mainly a mnemonic device to remember the coefficients. The tableau must fulfill the conditions listed in Equation 75.

c_i = Σ_{j=1}^{s} a_{ij} = Σ_{j=1}^{i−1} a_{ij}   (i = 2, ..., s)
Σ_{j=1}^{s} b_j = 1
c₁ = 0
a_{1j} = 0   (1 ≤ j ≤ s)
a_{ij} = 0   (j ≥ i)
(75)

Where the variables mean the following: the c_i are the nodes, the a_{ij} the stage coefficients, and the b_j the weights.

The general tableau can be seen in Equation 76.

     | k₁    k₂    ...   k_s
c₁   | a₁₁   a₁₂   ...   a₁ₛ
c₂   | a₂₁   a₂₂   ...   a₂ₛ
⋮    | ⋮     ⋮           ⋮
c_s  | a_s1  a_s2  ...   a_ss
-----+---------------------
     | b₁    b₂    ...   b_s
(76)

A step from xn to xn+1 =xn+hn(n= 0,1,… ) in the general Runge-Kutta method is defined by Equation 77 where the values a,b,c can be read from the butcher tableau.

k₁ = f(x_n + c₁h_n, y_n + h_n·Σ_{j=1}^{s} a_{1,j}k_j)
k₂ = f(x_n + c₂h_n, y_n + h_n·Σ_{j=1}^{s} a_{2,j}k_j)    (the second argument is called g₂)
k₃ = f(x_n + c₃h_n, y_n + h_n·Σ_{j=1}^{s} a_{3,j}k_j)    (g₃)
⋮
k_s = f(x_n + c_sh_n, y_n + h_n·Σ_{j=1}^{s} a_{s,j}k_j)  (g_s)
(s stages)

y_{n+1} = y_n + h_n·Σ_{j=1}^{s} b_j·k_j
(77)

Some examples for butcher tableaus can be found in Table 1


Table 1: Butcher Tableaus

The simplest adaptive Runge–Kutta method involves combining Heun’s method, which is order 2, with the Euler method, which is order 1 (also called Heun-Euler 2(1)). Its extended Butcher Tableau can be seen in Equation 78.

0 |
1 | 1
--+----------
  | 1/2  1/2
  |  1    0
(78)

8.5 Step-size adaption

8.5.1 Idea

The idea is to automatically adapt the step size h. For that, one needs a new way to define the approximation error, which can be done with an accuracy goal (ag), which defines how many decimal places are correct, and a precision goal (pg), which represents the significant digits of the result. The two parameters are combined in the tolerance parameter ε, which can be found in Equation 79.

ε = ε_a + |y|·ε_r = 10^{−ag} + |y|·10^{−pg} ≥ |e|
(79)

Furthermore, one needs a second approximation for the error calculation, with order p̂. The first approximation of order p is used to calculate the step size. Mostly p̂ = p − 1. For this the Butcher tableau is extended by a row (the b̂ values), as can be seen in Table 2.

0       | 0         0         ···   0           0
c₂      | a₂,₁      0         ···   0           0
⋮       | ⋮         ⋮         ⋱     0           ⋮
c_{s−1} | a_{s−1,1} a_{s−1,2} ···   0           0
c_s     | a_{s,1}   a_{s,2}   ···   a_{s,s−1}   0
        | b₁        b₂        ···   b_{s−1}     b_s
        | b̂₁        b̂₂        ···   b̂_{s−1}     b̂_s
Table 2: Extended Butcher tableau

The local error can be calculated according to Equation 80.

e_n = y(x+h) − ŷ(x+h) = h_n·Σ_{j=1}^{s}(b_j − b̂_j)·k_j   ⇒   ∥e_n∥ = ∥ h_n·Σ_{j=1}^{s}(b_j − b̂_j)·k_j ∥
(80)

Below again a short description of the variables: e_n is the estimated local error from Equation 80 and ε the tolerance from Equation 79.

To know if the step size is good or not, one calculates ∥e_n∥/ε. When ∥e_n∥/ε > 1, the estimate h_n was too optimistic and the step must be repeated with a smaller step size; one says the current step is rejected. Otherwise, when ∥e_n∥/ε ≤ 1, the step size is ok and one can proceed. Updating the step size is done according to Equation 81.

h_{n+1} = h_n·(ε/∥e_n∥)^{1/p̃} = h_n·(∥e_n∥/ε)^{−1/p̃}
(81)

ε = ε_a + ε_r·|y_n|, with p̃ = min(p, p̂) + 1 (order of the primary method)

8.5.2 Stability of explicit methods

The global relative error must not diverge, which means it must be limited. Since it is difficult to make a statement about the analysed ODE one uses benchmark equations. One of the most commonly used ones is the Dahlquist model, which can be seen in Equation 82

y′ = Ay,   y(0) = 1,   with A = ℜ{A} + jℑ{A} ∈ ℂ
(82)

The solution of this equation is y = e^{ℜ{A}x}·(cos(ℑ{A}x) + j·sin(ℑ{A}x)), which is an oscillation with exponential amplitude e^{ℜ{A}x} and frequency ℑ{A}.

Example Euler

Y′ = −λY;   Y(0) = 1;   x ≥ 0;   λ > 0

The exact solution is Y(x) = e^{−λx}. Consider Euler's method: the numerical solution will go to zero iff

y_{n+1} = y_n + h·f(x_n, y_n) = y_n − hλ·y_n = (1 − hλ)·y_n
|1 − hλ| < 1 ⇒ 2/λ > h > 0

Euler's method is therefore stable for this ODE if 0 < h < 2/λ.

Stability for the Heun method (RK2)
From Equation 77 one knows that y_{k+1} = y_k + h·((1/2)·k₁ + (1/2)·k₂)

k₁ = A·y_k,   k₂ = A·(y_k + hk₁) = A·(y_k + Ahy_k),   y₀ = y(x₀) = y(0) = 1
⇒ y_{k+1} = y_k·(1 + hA + (1/2)(hA)²),   with F(hA) = F(z), z = hA ∈ ℂ
(83)

From that one can derive three cases, which are listed below.

3 Cases

The stability condition for case one (ℜ{A} < 0) can now be calculated: 1 > |F(z)| = |1 + hA + (1/2)(hA)²| ⇒ −2 < hℜ{A} < 0, since the boundary roots of z + (1/2)z² = 0 are z₁,₂ = (−b ± √(b² − 4·a·c))/(2·a) = (−1 ± √(1² − 4·(1/2)·0))/(2·(1/2)) = −1 ± 1, i.e. 0 and −2, and therefore the values must lie between −2 and 0. The stability polynomial in this exercise was F(z) = 1 + z + z²/2 (z = hA).

Recursive Formulas
For stability polynomials of the form F(z) = 1 + b₁k₁(z) + ··· + b_s·k_s(z), recursive equations exist, as can be seen in Equation 84.

k₁(z) = z,   k_{j+1}(z) = z·(1 + a_{j+1,1}k₁(z) + a_{j+1,2}k₂(z) + ... + a_{j+1,j}k_j(z))
(84)
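Equation 84 translates directly into code; a minimal sketch (assuming numpy, helper name is ours) that builds F(z) from a tableau's a and b values:

```python
from numpy.polynomial import Polynomial

def stability_poly(a, b):
    """F(z) = 1 + sum_j b_j k_j(z), with k_j from the recursion in Equation 84."""
    z = Polynomial([0, 1])
    ks = [z]                                   # k_1(z) = z
    for j in range(1, len(b)):                 # k_{j+1}(z) = z(1 + sum a_{j+1,l} k_l(z))
        ks.append(z * (1 + sum(a[j][l] * ks[l] for l in range(j))))
    return 1 + sum(bj * kj for bj, kj in zip(b, ks))

# Heun (RK2): a21 = 1, b = (1/2, 1/2)  ->  1 + z + 0.5 z^2
print(stability_poly([[0, 0], [1, 0]], [0.5, 0.5]))
```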

8.5.3 Exercise adaptive step size

Solve the initial value problem φ′ = c·(1 − ε·cos φ)², φ(0) = 0, with c = 1 and ε = 0.25 numerically by applying the Heun-Euler 2(1) embedded adaptive method with classical step-size control until 3 proceeding steps are executed. The initial step size equals 0.001, the accuracy goal (ag) is 4 and the precision goal (pg) is 4 as well.
Create a table listing values for (x, y, h, e_k, (∥e_k∥/ε)^{−1/p̃}, h_new, state) containing at least three proceeding steps.
According to the exercise, we know the following:

1.
The position at the beginning: x = 0, y = 0
2.
The step size: h = 0.001
3.
The local error can be calculated according to Equation 80, which says that e_k = h_n·Σ_{j=1}^{s}(b_j − b̂_j)·k_j. From Equation 78 one knows b₁ = 1/2, b̂₁ = 1, b₂ = 1/2 and b̂₂ = 0. Furthermore, one knows from Equation 77 and Equation 78 that k₁ = f(x_n + c₁·h_n, y_n) with c₁ = 0, so k₁ = 1·(1 − (1/4)·cos 0)² = 9/16, and k₂ = f(x_n + c₂·h_n, y_n + h_n·a₂,₁·k₁) with c₂ = a₂,₁ = 1, so k₂ = 1·(1 − (1/4)·cos(0.001·1·(9/16)))² = 0.5625. Therefore e_k = h_n·((b₁ − b̂₁)·k₁ + (b₂ − b̂₂)·k₂) = 0.001·(−(1/2)·(9/16) + (1/2)·0.5625) = 2.966·10⁻¹¹
4.
Since p = 2 (order), p̃ = min(p, p̂) + 1 = 2, and ε can be calculated according to Equation 79, which says: ε = ε_a + |y|·ε_r = 10^{−ag} + |y|·10^{−pg} = 10⁻⁴ + 0·10⁻⁴ = 10⁻⁴. Therefore (∥e_k∥/ε)^{−1/p̃} = (2.966·10⁻¹¹/10⁻⁴)^{−1/2} = 1.836·10³
5.
With Equation 81 one can then finally calculate the new step size: h_{n+1} = h_n·(∥e_n∥/ε)^{−1/p̃} = 0.001·1.836·10³ = 1.836
6.
The new y value can be calculated according to Equation 77, which means for the current scheme y₁ = 0 + 0.001·((1/2)·k₁ + (1/2)·k₂) = 0.001·((1/2)·(9/16) + (1/2)·0.5625) = 0.0005625

x     | y         | h_n   | e_k         | (∥e_k∥/ε)^{−1/p̃} | h_{n+1} | state
0     | 0         | 0.001 | 2.966·10⁻¹¹ | 1.836·10³        | 1.836   | Proceed
0.001 | 0.0005625 | 1.836 | 0.18169     | 0.02347          | 0.043   | Reject
Table 3: Exercise
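The table rows can be reproduced programmatically; a minimal sketch (assuming numpy, the step helper is ours, not a library routine):

```python
import numpy as np

def heun_euler_step(f, x, y, h, ag=4, pg=4):
    """One Heun-Euler 2(1) step with classical step-size control (Eqs. 78-81)."""
    k1 = f(x, y)
    k2 = f(x + h, y + h * k1)
    err = abs(h * (-0.5 * k1 + 0.5 * k2))    # b - b_hat = (-1/2, +1/2), Eq. 80
    tol = 10.0**-ag + abs(y) * 10.0**-pg     # Eq. 79
    h_new = h * (err / tol) ** (-1 / 2)      # Eq. 81 with p_tilde = 2
    if err <= tol:
        return x + h, y + h * (k1 + k2) / 2, h_new, "Proceed"
    return x, y, h_new, "Reject"

f = lambda x, phi: (1 - 0.25 * np.cos(phi)) ** 2   # phi' = c(1 - eps*cos(phi))^2
x, y, h = 0.0, 0.0, 0.001
for _ in range(4):
    x, y, h, state = heun_euler_step(f, x, y, h)
    print(f"{x:.4g} {y:.6g} {h:.4g} {state}")
```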
8.5.4 Exercise Stability polynomial

Using Theorem 1.3 in the script (p. 29), obtain the A-stability polynomial F₁(z) = 1 + z + z²/2 + z³/6 for the embedded adaptive method SS3(2) with the Butcher tableau below.

0   |
1/2 | 1/2
1   | −1             2
1   | 1/6            2/3            1/6
----+-------------------------------------------------------
    | 1/6            2/3            1/6            0
    | (√82 − 10)/72  (10 − √82)/36  (28 − √82)/144 (√82 − 16)/48

From Equation 84 one knows that a stability polynomial is of the form F(z) = 1 + b₁k₁(z) + b₂k₂(z) + b₃k₃(z) + 0·k₄(z), whereas:

k₁(z) = z
k₂(z) = z·(1 + (1/2)k₁(z)) = z·(1 + (1/2)z)
k₃(z) = z·(1 − k₁(z) + 2k₂(z)) = z·(1 − z + 2z·(1 + (1/2)z)) = z + z² + z³

Therefore F(z) = 1 + (1/6)k₁(z) + (2/3)k₂(z) + (1/6)k₃(z) = 1 + z + (1/2)z² + (1/6)z³

8.5.5 Stiffness

The stiffness is dependent on:

Stiffness Detection
Heun-Euler does not work in stiffness situations. Condition: c_{s−1} = c_s = 1. By testing |h·∂f(x,y)/∂y| against the absolute borders of the stability region, stiffness can be detected: stiffness is present when |hλ̃| is outside or at the border of the stability region.

Example for explicit Runge-Kutta

k_{s−1} = f(x + c_{s−1}h, y + h·Σ_{j=1}^{s} a_{s−1,j}k_j)   (second argument: g_{s−1})
k_s = f(x + c_s·h, y + h·Σ_{j=1}^{s} a_{s,j}k_j)            (second argument: g_s)
⇒ λ̃ = ∥k_s − k_{s−1}∥ / ∥g_s − g_{s−1}∥

λ̃ is an estimate for f_y = ∂f(x,y)/∂y (for example for y′ = x⁴ − 25y⁴ = f(x,y)) and takes the role of ℜ{A} for the stability analysis.

8.5.6 Exercise stiffness detection test

Given the differential equation y′ = −√(x² + y²) = f(x,y), y(0) = 4, carry out a stiffness detection test using the A-stability region and the partial derivative f_y at the initial values, with step size h = 1. The method is defined by the Butcher tableau below.

0   |
1/2 | 1/2
----+---------
b   | 0    1
b̂   | 1    0

The exercise can be solved in two steps:

1.
Get the stability region (Equation 84):
From Equation 84 one knows that a stability polynomial is of the form F(z) = 1 + b₁k₁(z) + b₂k₂(z), whereas:
k₁(z) = z
k₂(z) = z·(1 + (1/2)k₁(z)) = z·(1 + (1/2)z)

Therefore F(z) = 1 + 0·k₁(z) + 1·k₂(z) = 1 + z + (1/2)z², where z = h·A.

To calculate the stability region one has to set z + (1/2)z² = 0, whose roots z₁,₂ = (−b ± √(b² − 4·a·c))/(2·a) = (−1 ± √(1² − 4·(1/2)·0))/(2·(1/2)) = −1 ± 1 give the boundary values −2 and 0.

2.
Check stiffness (see subsubsection 8.5.5):
Calculating the partial derivative of −√(x² + y²) with respect to y gives −(1/2)·(1/√(x² + y²))·2y = −y/√(x² + y²); inserting x = 0 and y = 4 results in f_y = −1, so A = −1 and h·A = −1, which is inside the A-stability region −2 < Ah < 0, and therefore no stiffness is detected.

8.5.7 Van der Pol second-order differential equation

Solve the van der Pol ODE system (z′ = v; v′ = µ(1 − z²)v − z), z(0) = 1, v(0) = −1, µ = 0.2, numerically by applying the Heun-Euler 2(1) embedded adaptive method with classical step-size control until 3 proceeding steps are executed. The initial step size equals 0.001, the accuracy goal (ag) is 1 and the precision goal (pg) is 2. Create a table listing values for (t, {z,v}, h, e_k, (∥e_n∥/∥ε_a + ε_r·y⃗_n∥)^{−1/p̃}, h_new, state) containing at least three proceeding steps.

According to the exercise, we know the following:

1.
The position at the beginning: t = 0, z = 1, v = −1
2.
The step size: h = 0.001
3.
The local error can be calculated according to Equation 80, which says that e_k = h_n·Σ_{j=1}^{s}(b_j − b̂_j)·k_j. From Equation 78 one knows b₁ = 1/2, b̂₁ = 1, b₂ = 1/2 and b̂₂ = 0. Furthermore, one knows from Equation 77 and Equation 78 that

k₁ = f(x_n + c₁h_n, y⃗_n) = ( v ; µ(1 − z²)v − z ) = ( −1 ; 0.2·(1 − 1²)·(−1) − 1 ) = ( −1 ; −1 )

and

k₂ = f(x_n + c₂h_n, y⃗_n + h_n·a₂,₁·k₁)
   = ( −1 + 0.001·(−1) ; 0.2·(1 − (1 + 0.001·(−1))²)·(−1 + 0.001·(−1)) − (1 + 0.001·(−1)) )
   = ( −1.001 ; −0.9994001998 )

Therefore e_k = 0.001·( (1/2 − 1)·k₁ + (1/2 − 0)·k₂ ) = 0.001·( −(1/2)·(−1; −1) + (1/2)·(−1.001; −0.9994001998) ) = ( −5·10⁻⁷ ; 2.999001·10⁻⁷ )

4.
Since p = 2 (order), p̃ = 2, and ε can be calculated according to Equation 79: ε = ε_a + y⃗·ε_r = 10^{−ag} + y⃗·10^{−pg} = 10⁻¹ + (1; −1)·10⁻² = (0.11; 0.09). Therefore (∥e_k∥/ε)^{−1/p̃} = ( max(abs( (−5·10⁻⁷; 2.999001·10⁻⁷) ./ (0.11; 0.09) )) )^{−1/2} = 469.04157598235
5.
With Equation 81 one can then finally calculate the new step size: h_{n+1} = h_n·(∥e_n∥/ε)^{−1/p̃} = 0.001·469.04157598235 = 0.46904157598235
6.
The new y value can be calculated according to Equation 77, which means for the current scheme y⃗₁ = (1; −1) + 0.001·( (1/2)·(−1; −1) + (1/2)·(−1.001; −0.9994001998) ) = ( 0.9989995 ; −1.0009997000999 )

t     | {z, v}                     | h_n     | e_k                    | (∥e_k∥/ε)^{−1/p̃} | h_{n+1} | state
0     | (1; −1)                    | 0.001   | (−5·10⁻⁷; 2.999·10⁻⁷)  | 469.04157598235  | 0.46904 | Proceed
0.001 | (0.9989995; −1.0009997001) | 0.46904 |                        |                  |         | Reject
Table 4: Exercise
z″(t) = µ(1 − z²)·z′ − z    (the term µ(1 − z²)·z′ is a non-linear damping)
(85)

z′ = z₁
z₁′ = z₂
z₂′ = z₃
⋮
z_{n−1}′ = f(x, z, z₁, z₂, ···, z_{n−1})
(86)

z′ = v
v′ = µ(1 − z²)·v − z
(87)

y⃗″(x) = df⃗/dx = J_f⃗(x, y⃗)·(1, y⃗′(x))ᵀ = (∂f⃗/∂x, ∂f⃗/∂y₁, ∂f⃗/∂y₂, ···, ∂f⃗/∂y_m)·(1, y₁′(x), y₂′(x), ···, y_m′(x))ᵀ
       = ∂f⃗/∂x + (∂f⃗/∂y⃗)·y⃗′(x) := f⃗_x + f⃗_y·f⃗ := F⃗₁        (with ∂f⃗/∂y⃗ = J_f⃗(y⃗) and y⃗′(x) = f⃗)
(88)

f⃗(t,z,v) = ( f₁(t,z,v) ; f₂(t,z,v) ) = ( v ; µ(1 − z²)·v − z )
(89)
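The first adaptive step of the table can be reproduced with a few lines of vector arithmetic; a minimal sketch (assuming numpy), not the script's reference implementation:

```python
import numpy as np

mu = 0.2
f = lambda t, y: np.array([y[1], mu * (1 - y[0]**2) * y[1] - y[0]])  # y = [z, v]

t, y, h = 0.0, np.array([1.0, -1.0]), 0.001
k1 = f(t, y)
k2 = f(t + h, y + h * k1)
e = h * 0.5 * (k2 - k1)                     # e_n = h * sum (b_j - b̂_j) k_j, Eq. 80
eps = 10.0**-1 + y * 10.0**-2               # ag = 1, pg = 2, signed y as in the table
fac = np.max(np.abs(e / eps)) ** (-1 / 2)   # p̃ = 2
print(e, fac, h * fac)                      # ≈ [-5e-07  3e-07], 469.04, 0.469
```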

|                    | Time domain        | Frequency domain X(f)        | Frequency domain X(jω)     |
| Linearity          | c₁x₁(t) + c₂x₂(t)  | c₁X₁(f) + c₂X₂(f)            | c₁X₁(jω) + c₂X₂(jω)        |
| Convolution        | x(t) ∗ y(t)        | X(f)·Y(f)                    | X(jω)·Y(jω)                |
| Multiplication     | x(t)·y(t)          | X(f) ∗ Y(f)                  | (1/2π)·X(jω) ∗ Y(jω)       |
| Shift              | x(t − t₀)          | X(f)·e^{−j2πft₀}             | X(jω)·e^{−jωt₀}            |
| Modulation         | e^{j2πf₀t}·x(t)    | X(f − f₀)                    | X(j[ω − ω₀])               |
| Linear weighting   | t·x(t)             | −(1/j2π)·(d/df)X(f)          | −(d/d(jω))X(jω)            |
| Differentiation    | (d/dt)x(t)         | j2πf·X(f)                    | jω·X(jω)                   |
| Integration        | ∫_{−∞}^{t} x(τ)dτ  | X(f)/(j2πf) + (1/2)X(0)δ(f)  | X(jω)/(jω) + πX(0)δ(ω)     |
| Scaling            | x(at)              | (1/|a|)·X(f/a)               | (1/|a|)·X(jω/a)            |
| Time inversion     | x(−t)              | X(−f)                        | X(−jω)                     |
| Conjugate complex  | x*(t)              | X*(−f)                       | X*(−jω)                    |
| Real part          | x_R(t)             | X_g(f)                       | X_g(jω)                    |
| Imaginary part     | jx_I(t)            | X_u(f)                       | X_u(jω)                    |
| Duality            | X(t) [X(jt)]       | x(−f)                        | 2πx(−ω)                    |
| Parseval's theorem | ∫_{−∞}^{∞} x(t)·y*(t)dt = ∫_{−∞}^{∞} X(f)·Y*(f)df = (1/2π)·∫_{−∞}^{∞} X(jω)·Y*(jω)dω |

| Nr. | x(t)                 | X(f)                         | X(jω)                        |
| 1   | δ(t)                 | 1                            | 1                            |
| 2   | 1                    | δ(f)                         | 2πδ(ω)                       |
| 3   | Ш_T(t)               | (1/|T|)·Ш_{1/T}(f)           | (2π/|T|)·Ш_{2π/T}(ω)         |
| 4   | ε(t)                 | (1/2)δ(f) + 1/(j2πf)         | πδ(ω) + 1/(jω)               |
| 5   | sgn(t)               | 1/(jπf)                      | 2/(jω)                       |
| 6   | 1/(πt)               | −j·sgn(f)                    | −j·sgn(ω)                    |
| 7   | rect(t/T) (T = width)| |T|·si(πTf)                  | |T|·si(Tω/2)                 |
| 8   | si(πt/T)             | |T|·rect(Tf)                 | |T|·rect(Tω/2π)              |
| 9   | Λ(t/T)               | |T|·si²(πTf)                 | |T|·si²(Tω/2)                |
| 10  | si²(πt/T)            | |T|·Λ(Tf)                    | |T|·Λ(Tω/2π)                 |
| 11  | e^{j2πf₀t}           | δ(f − f₀)                    | 2πδ(ω − ω₀)                  |
| 12  | cos(2πf₀t)           | (1/2)[δ(f+f₀) + δ(f−f₀)]     | π[δ(ω+ω₀) + δ(ω−ω₀)]         |
| 13  | sin(2πf₀t)           | (j/2)[δ(f+f₀) − δ(f−f₀)]     | jπ[δ(ω+ω₀) − δ(ω−ω₀)]        |
| 14  | e^{−a²t²}            | (√π/a)·e^{−(πf/a)²}          | (√π/a)·e^{−ω²/(4a²)}         |
| 15  | e^{−|t|/T}           | 2T/(1 + (2πTf)²)             | 2T/(1 + (Tω)²)               |

Where

sinc(x) = sin(πx)/(πx)

si(x) = sin(x)/x

Common angles

Degrees | 0 | 30   | 45   | 60   | 90
Radians | 0 | π/6  | π/4  | π/3  | π/2
sin θ   | 0 | 1/2  | √2/2 | √3/2 | 1
cos θ   | 1 | √3/2 | √2/2 | 1/2  | 0
tan θ   | 0 | √3/3 | 1    | √3   | (undefined)

Reciprocal functions

cot x = 1/tan x
csc x = 1/sin x
sec x = 1/cos x

Even/odd

sin(−x) = −sin x
cos(−x) = cos x
tan(−x) = −tan x

Pythagorean identities

sin²x + cos²x = 1
1 + tan²x = sec²x
1 + cot²x = csc²x

Cofunction identities

sin(π/2 − x) = cos x
cos(π/2 − x) = sin x
tan(π/2 − x) = cot x
cot(π/2 − x) = tan x
sec(π/2 − x) = csc x
csc(π/2 − x) = sec x

Sum and difference of angles

sin(x+y) = sin x cos y + cos x sin y
sin(x−y) = sin x cos y − cos x sin y
cos(x+y) = cos x cos y − sin x sin y
cos(x−y) = cos x cos y + sin x sin y
tan(x+y) = (tan x + tan y)/(1 − tan x tan y)
tan(x−y) = (tan x − tan y)/(1 + tan x tan y)

Double angles

sin(2x) = 2 sin x cos x
cos(2x) = cos²x − sin²x = 2cos²x − 1 = 1 − 2sin²x
tan(2x) = 2 tan x/(1 − tan²x)

Half angles

sin(x/2) = ±√((1 − cos x)/2)
cos(x/2) = ±√((1 + cos x)/2)
tan(x/2) = (1 − cos x)/sin x = sin x/(1 + cos x)

Power reducing formulas

sin²x = (1 − cos 2x)/2
cos²x = (1 + cos 2x)/2
tan²x = (1 − cos 2x)/(1 + cos 2x)

Product to sum

sin x sin y = (1/2)[cos(x−y) − cos(x+y)]
cos x cos y = (1/2)[cos(x−y) + cos(x+y)]
sin x cos y = (1/2)[sin(x+y) + sin(x−y)]
tan x tan y = (tan x + tan y)/(cot x + cot y)
tan x cot y = (tan x + cot y)/(cot x + tan y)

Sum to product

sin x + sin y = 2 sin((x+y)/2)·cos((x−y)/2)
sin x − sin y = 2 cos((x+y)/2)·sin((x−y)/2)
cos x + cos y = 2 cos((x+y)/2)·cos((x−y)/2)
cos x − cos y = −2 sin((x+y)/2)·sin((x−y)/2)
tan x + tan y = sin(x+y)/(cos x cos y)
tan x − tan y = sin(x−y)/(cos x cos y)

9 Formulas

9.1 Differentiation Formulas

1. d(u±v)/dx = du/dx ± dv/dx
2. d(k·u)/dx = k·du/dx (k constant)
3. d(u·v)/dx = (du/dx)·v + u·(dv/dx)
4. d(u/v)/dx = ((du/dx)·v − u·(dv/dx))/v²
5. dz/dx = (dz/dy)·(dy/dx) if z = f(y) and y = g(x)
6. d/dx(xⁿ) = n·x^{n−1}
7. d/dx(eˣ) = eˣ
8. d/dx(aˣ) = aˣ·ln a (a > 0)
9. d/dx(ln x) = 1/x
10. d/dx(sin x) = cos x
11. d/dx(cos x) = −sin x
12. d/dx(tan x) = 1/cos²x
13. d/dx(arcsin x) = 1/√(1−x²)
14. d/dx(arctan x) = 1/(1+x²)

9.2 Integration Formulas

1. ∫ₐᵇ(u ± v)dx = ∫ₐᵇ u dx ± ∫ₐᵇ v dx
2. ∫ₐᵇ k·u dx = k·∫ₐᵇ u dx (k constant)
3. ∫_{x=a}^{b} f(g(x))·g′(x)dx = ∫_{w=g(a)}^{g(b)} f(w)dw
4. ∫ₐᵇ u·(dv/dx)dx = (u·v)|ₐᵇ − ∫ₐᵇ (du/dx)·v dx

9.3 Table of Indefinite Integrals

9.3.1 Basic Functions

1. ∫xⁿ dx = x^{n+1}/(n+1) + C, n ≠ −1
2. ∫(1/x)dx = ln|x| + C
3. ∫aˣ dx = aˣ/ln a + C, a > 0
4. ∫ln x dx = x·ln x − x + C
5. ∫sin x dx = −cos x + C
6. ∫cos x dx = sin x + C
7. ∫tan x dx = −ln|cos x| + C

9.3.2 Products of eˣ with cos x and sin x

8. ∫e^{ax}·sin(bx)dx = (1/(a²+b²))·e^{ax}·(a·sin(bx) − b·cos(bx)) + C
9. ∫e^{ax}·cos(bx)dx = (1/(a²+b²))·e^{ax}·(a·cos(bx) + b·sin(bx)) + C
10. ∫sin(ax)·sin(bx)dx = (1/(b²−a²))·(a·cos(ax)sin(bx) − b·sin(ax)cos(bx)) + C, a ≠ b
11. ∫cos(ax)·cos(bx)dx = (1/(b²−a²))·(b·cos(ax)sin(bx) − a·sin(ax)cos(bx)) + C, a ≠ b
12. ∫sin(ax)·cos(bx)dx = (1/(b²−a²))·(b·sin(ax)sin(bx) + a·cos(ax)cos(bx)) + C, a ≠ b

9.3.3 Product of Polynomial p(x) with ln x, eˣ, cos x, sin x

13. ∫xⁿ·ln x dx = (x^{n+1}/(n+1))·ln x − x^{n+1}/(n+1)² + C, n ≠ −1
14. ∫p(x)·e^{ax}dx = (1/a)p(x)e^{ax} − (1/a)∫p′(x)e^{ax}dx = (1/a)p(x)e^{ax} − (1/a²)p′(x)e^{ax} + (1/a³)p″(x)e^{ax} − ...
(signs alternate: +−+−+−)
15. ∫p(x)·sin(ax)dx = −(1/a)p(x)cos(ax) + (1/a)∫p′(x)cos(ax)dx = −(1/a)p(x)cos(ax) + (1/a²)p′(x)sin(ax) + (1/a³)p″(x)cos(ax) − ...
(signs alternate in pairs after first term: −++−−++)
16. ∫p(x)·cos(ax)dx = (1/a)p(x)sin(ax) − (1/a)∫p′(x)sin(ax)dx = (1/a)p(x)sin(ax) + (1/a²)p′(x)cos(ax) − (1/a³)p″(x)sin(ax) − ...
(signs alternate in pairs: ++−−++−)

9.4 Taylor Polynomial/Series

Development of f around a

f(x) ≈ f(a) + f′(a)·(x−a) + (f″(a)/2!)·(x−a)² + (f‴(a)/3!)·(x−a)³ + ...

f(a+h) ≈ f(a) + f′(a)·h + (f″(a)/2!)·h² + (f‴(a)/3!)·h³ + ...

in which k! = 1·2·3···k.

9.4.1 Important Taylor Series

eˣ = 1 + x + (1/2!)x² + (1/3!)x³ + ...
cos x = 1 − (1/2!)x² + (1/4!)x⁴ − (1/6!)x⁶ + ...
sin x = x − (1/3!)x³ + (1/5!)x⁵ − (1/7!)x⁷ + ...
1/(1−x) = 1 + x + x² + x³ + x⁴ + ... (geometric series)
(1+x)ᵖ = 1 + px + (p(p−1)/2!)x² + (p(p−1)(p−2)/3!)x³ + ...
ln(1+x) = x − x²/2 + x³/3 − x⁴/4 + ...

The last three only converge for |x|< 1.

9.5 Determinant

9.5.1 Sarrus

Duplicate the first two columns to the right of the 3×3 matrix; add the products of the three falling diagonals and subtract the products of the three rising diagonals:

det A = a₁₁a₂₂a₃₃ + a₁₂a₂₃a₃₁ + a₁₃a₂₁a₃₂ − a₁₃a₂₂a₃₁ − a₁₁a₂₃a₃₂ − a₁₂a₂₁a₃₃

Figure 7: Sarrus rule

9.6 Matrix

9.6.1 Transpose
A = ⎡ a  b  c ⎤          Aᵀ = ⎡ a  d ⎤
    ⎣ d  e  f ⎦ (2×3)         ⎢ b  e ⎥
                              ⎣ c  f ⎦ (3×2)
(90)

9.6.2 Multiplication

For A (p×n) and B (n×q), the product C = A·B is p×q with entries c_{ij} = Σ_{k=1}^{n} a_{ik}·b_{kj}, i.e. row i of A times column j of B (in the Falk scheme, B is written above and to the right of A, and each entry of C sits at the crossing of its row of A and its column of B).

Figure 8: Matrix Multiplication